Sep 10 04:48:25.737530 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 10 04:48:25.737567 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Sep 10 03:31:38 -00 2025 Sep 10 04:48:25.737578 kernel: KASLR enabled Sep 10 04:48:25.737583 kernel: efi: EFI v2.7 by EDK II Sep 10 04:48:25.737589 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 10 04:48:25.737594 kernel: random: crng init done Sep 10 04:48:25.737601 kernel: secureboot: Secure boot disabled Sep 10 04:48:25.737606 kernel: ACPI: Early table checksum verification disabled Sep 10 04:48:25.737612 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 10 04:48:25.737619 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 10 04:48:25.737625 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737630 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737636 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737642 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737649 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737656 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737662 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737668 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737674 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 04:48:25.737680 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 10 04:48:25.737685 kernel: ACPI: Use ACPI SPCR as default console: No Sep 10 04:48:25.737691 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 10 04:48:25.737697 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 10 04:48:25.737703 kernel: Zone ranges: Sep 10 04:48:25.737709 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 10 04:48:25.737716 kernel: DMA32 empty Sep 10 04:48:25.737722 kernel: Normal empty Sep 10 04:48:25.737728 kernel: Device empty Sep 10 04:48:25.737734 kernel: Movable zone start for each node Sep 10 04:48:25.737739 kernel: Early memory node ranges Sep 10 04:48:25.737745 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 10 04:48:25.737751 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 10 04:48:25.737757 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 10 04:48:25.737763 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 10 04:48:25.737769 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 10 04:48:25.737775 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 10 04:48:25.737781 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 10 04:48:25.737788 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 10 04:48:25.737793 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 10 04:48:25.737799 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 10 04:48:25.737815 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 10 04:48:25.737822 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 10 04:48:25.737829 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 10 04:48:25.737836 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 10 04:48:25.737843 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 10 04:48:25.737849 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 10 04:48:25.737856 kernel: psci: probing for conduit method from ACPI. Sep 10 04:48:25.737862 kernel: psci: PSCIv1.1 detected in firmware. Sep 10 04:48:25.737868 kernel: psci: Using standard PSCI v0.2 function IDs Sep 10 04:48:25.737874 kernel: psci: Trusted OS migration not required Sep 10 04:48:25.737881 kernel: psci: SMC Calling Convention v1.1 Sep 10 04:48:25.737887 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 10 04:48:25.737894 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 10 04:48:25.737901 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 10 04:48:25.737908 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 10 04:48:25.737914 kernel: Detected PIPT I-cache on CPU0 Sep 10 04:48:25.737920 kernel: CPU features: detected: GIC system register CPU interface Sep 10 04:48:25.737927 kernel: CPU features: detected: Spectre-v4 Sep 10 04:48:25.737933 kernel: CPU features: detected: Spectre-BHB Sep 10 04:48:25.737939 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 10 04:48:25.737945 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 10 04:48:25.737952 kernel: CPU features: detected: ARM erratum 1418040 Sep 10 04:48:25.737959 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 10 04:48:25.737965 kernel: alternatives: applying boot alternatives Sep 10 04:48:25.737972 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc557398806956d5b7cf8f58d9bd1545e6d9edee390c62eb4b21701fba26a284 Sep 10 04:48:25.737980 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 04:48:25.737987 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 04:48:25.737993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 04:48:25.737999 kernel: Fallback order for Node 0: 0 Sep 10 04:48:25.738006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 10 04:48:25.738012 kernel: Policy zone: DMA Sep 10 04:48:25.738018 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 04:48:25.738025 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 10 04:48:25.738031 kernel: software IO TLB: area num 4. Sep 10 04:48:25.738037 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 10 04:48:25.738043 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 10 04:48:25.738051 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 10 04:48:25.738058 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 10 04:48:25.738064 kernel: rcu: RCU event tracing is enabled. Sep 10 04:48:25.738071 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
Sep 10 04:48:25.738077 kernel: Trampoline variant of Tasks RCU enabled. Sep 10 04:48:25.738084 kernel: Tracing variant of Tasks RCU enabled. Sep 10 04:48:25.738090 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 10 04:48:25.738097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 10 04:48:25.738103 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 04:48:25.738109 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 04:48:25.738116 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 10 04:48:25.738123 kernel: GICv3: 256 SPIs implemented Sep 10 04:48:25.738130 kernel: GICv3: 0 Extended SPIs implemented Sep 10 04:48:25.738136 kernel: Root IRQ handler: gic_handle_irq Sep 10 04:48:25.738142 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 10 04:48:25.738148 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 10 04:48:25.738155 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 10 04:48:25.738161 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 10 04:48:25.738167 kernel: ITS@0x0000000008080000: allocated 8192 Devices @44110000 (indirect, esz 8, psz 64K, shr 1) Sep 10 04:48:25.738174 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @44120000 (flat, esz 8, psz 64K, shr 1) Sep 10 04:48:25.738180 kernel: GICv3: using LPI property table @0x0000000044130000 Sep 10 04:48:25.738186 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000044140000 Sep 10 04:48:25.738193 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 10 04:48:25.738200 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 04:48:25.738207 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 10 04:48:25.738213 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 10 04:48:25.738219 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 10 04:48:25.738226 kernel: arm-pv: using stolen time PV Sep 10 04:48:25.738233 kernel: Console: colour dummy device 80x25 Sep 10 04:48:25.738239 kernel: ACPI: Core revision 20240827 Sep 10 04:48:25.738246 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 10 04:48:25.738252 kernel: pid_max: default: 32768 minimum: 301 Sep 10 04:48:25.738259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 10 04:48:25.738266 kernel: landlock: Up and running. Sep 10 04:48:25.738273 kernel: SELinux: Initializing. Sep 10 04:48:25.738279 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 04:48:25.738286 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 04:48:25.738293 kernel: rcu: Hierarchical SRCU implementation. Sep 10 04:48:25.738299 kernel: rcu: Max phase no-delay instances is 400. Sep 10 04:48:25.738306 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 10 04:48:25.738312 kernel: Remapping and enabling EFI services. Sep 10 04:48:25.738319 kernel: smp: Bringing up secondary CPUs ... 
Sep 10 04:48:25.738331 kernel: Detected PIPT I-cache on CPU1 Sep 10 04:48:25.738338 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 10 04:48:25.738345 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000044150000 Sep 10 04:48:25.738353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 04:48:25.738360 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 10 04:48:25.738366 kernel: Detected PIPT I-cache on CPU2 Sep 10 04:48:25.738374 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 10 04:48:25.738381 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000044160000 Sep 10 04:48:25.738389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 04:48:25.738395 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 10 04:48:25.738402 kernel: Detected PIPT I-cache on CPU3 Sep 10 04:48:25.738409 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 10 04:48:25.738416 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000044170000 Sep 10 04:48:25.738423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 04:48:25.738429 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 10 04:48:25.738436 kernel: smp: Brought up 1 node, 4 CPUs Sep 10 04:48:25.738443 kernel: SMP: Total of 4 processors activated. Sep 10 04:48:25.738451 kernel: CPU: All CPU(s) started at EL1 Sep 10 04:48:25.738458 kernel: CPU features: detected: 32-bit EL0 Support Sep 10 04:48:25.738465 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 10 04:48:25.738472 kernel: CPU features: detected: Common not Private translations Sep 10 04:48:25.738479 kernel: CPU features: detected: CRC32 instructions Sep 10 04:48:25.738485 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 10 04:48:25.738492 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 10 04:48:25.738499 kernel: CPU features: detected: LSE atomic instructions Sep 10 04:48:25.738506 kernel: CPU features: detected: Privileged Access Never Sep 10 04:48:25.738514 kernel: CPU features: detected: RAS Extension Support Sep 10 04:48:25.738521 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 10 04:48:25.738528 kernel: alternatives: applying system-wide alternatives Sep 10 04:48:25.738535 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 10 04:48:25.738588 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved) Sep 10 04:48:25.738596 kernel: devtmpfs: initialized Sep 10 04:48:25.738603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 04:48:25.738610 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 10 04:48:25.738617 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 10 04:48:25.738625 kernel: 0 pages in range for non-PLT usage Sep 10 04:48:25.738632 kernel: 508560 pages in range for PLT usage Sep 10 04:48:25.738639 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 04:48:25.738646 kernel: SMBIOS 3.0.0 present. 
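The run of "CPU features: detected: ..." messages above is the kernel's cpufeature framework summarizing what this virtual CPU advertises (CRC32, LSE atomics, Privileged Access Never, SSBS, and so on). As a rough userspace cross-check only, not part of the boot flow logged here, a minimal sketch that lists the per-CPU Features hwcaps exposed through /proc/cpuinfo; it assumes the usual arm64 layout where each CPU block has a "processor : N" line followed by a "Features : ..." line.

```python
# Sketch: read back the per-CPU feature flags (hwcaps) from /proc/cpuinfo,
# the userspace-visible counterpart of the "CPU features: detected: ..."
# messages in the kernel log above. Assumes the standard arm64 layout.
features = {}
cpu = "?"
with open("/proc/cpuinfo") as f:
    for line in f:
        key, _, value = line.partition(":")
        key = key.strip().lower()
        if key == "processor":
            cpu = value.strip()
        elif key == "features":
            features[cpu] = value.split()

for cpu, flags in sorted(features.items()):
    print(f"cpu {cpu}: {' '.join(flags)}")
```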
Sep 10 04:48:25.738653 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 10 04:48:25.738659 kernel: DMI: Memory slots populated: 1/1 Sep 10 04:48:25.738666 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 04:48:25.738673 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 10 04:48:25.738680 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 10 04:48:25.738689 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 10 04:48:25.738695 kernel: audit: initializing netlink subsys (disabled) Sep 10 04:48:25.738702 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Sep 10 04:48:25.738709 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 04:48:25.738716 kernel: cpuidle: using governor menu Sep 10 04:48:25.738723 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 10 04:48:25.738730 kernel: ASID allocator initialised with 32768 entries Sep 10 04:48:25.738737 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 04:48:25.738744 kernel: Serial: AMBA PL011 UART driver Sep 10 04:48:25.738752 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 04:48:25.738759 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 10 04:48:25.738766 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 10 04:48:25.738772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 10 04:48:25.738779 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 04:48:25.738786 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 10 04:48:25.738793 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 10 04:48:25.738800 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 10 04:48:25.738812 kernel: ACPI: Added _OSI(Module Device) Sep 10 04:48:25.738821 kernel: ACPI: Added _OSI(Processor Device) Sep 10 04:48:25.738828 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 04:48:25.738834 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 04:48:25.738841 kernel: ACPI: Interpreter enabled Sep 10 04:48:25.738848 kernel: ACPI: Using GIC for interrupt routing Sep 10 04:48:25.738854 kernel: ACPI: MCFG table detected, 1 entries Sep 10 04:48:25.738861 kernel: ACPI: CPU0 has been hot-added Sep 10 04:48:25.738868 kernel: ACPI: CPU1 has been hot-added Sep 10 04:48:25.738875 kernel: ACPI: CPU2 has been hot-added Sep 10 04:48:25.738882 kernel: ACPI: CPU3 has been hot-added Sep 10 04:48:25.738890 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 10 04:48:25.738897 kernel: printk: legacy console [ttyAMA0] enabled Sep 10 04:48:25.738904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 04:48:25.739027 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 04:48:25.739090 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 10 04:48:25.739147 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 10 04:48:25.739203 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 10 04:48:25.739260 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 10 04:48:25.739269 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 10 04:48:25.739276 
kernel: PCI host bridge to bus 0000:00 Sep 10 04:48:25.739340 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 10 04:48:25.739392 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 10 04:48:25.739443 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 10 04:48:25.739494 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 04:48:25.739587 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 10 04:48:25.739657 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 10 04:48:25.739716 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 10 04:48:25.739774 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 10 04:48:25.739846 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 10 04:48:25.739907 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 10 04:48:25.739965 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 10 04:48:25.740026 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 10 04:48:25.740079 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 10 04:48:25.740130 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 10 04:48:25.740181 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 10 04:48:25.740190 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 10 04:48:25.740197 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 10 04:48:25.740204 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 10 04:48:25.740213 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 10 04:48:25.740220 kernel: iommu: Default domain type: Translated Sep 10 04:48:25.740226 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 10 04:48:25.740233 kernel: efivars: Registered efivars operations Sep 10 04:48:25.740240 kernel: vgaarb: loaded Sep 10 04:48:25.740247 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 10 04:48:25.740254 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 04:48:25.740261 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 04:48:25.740268 kernel: pnp: PnP ACPI init Sep 10 04:48:25.740335 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 10 04:48:25.740345 kernel: pnp: PnP ACPI: found 1 devices Sep 10 04:48:25.740352 kernel: NET: Registered PF_INET protocol family Sep 10 04:48:25.740359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 04:48:25.740366 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 04:48:25.740373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 04:48:25.740380 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 04:48:25.740387 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 10 04:48:25.740396 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 04:48:25.740403 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 04:48:25.740410 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 04:48:25.740416 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 04:48:25.740423 kernel: PCI: CLS 0 bytes, default 64 Sep 10 04:48:25.740430 
kernel: kvm [1]: HYP mode not available Sep 10 04:48:25.740437 kernel: Initialise system trusted keyrings Sep 10 04:48:25.740444 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 04:48:25.740450 kernel: Key type asymmetric registered Sep 10 04:48:25.740459 kernel: Asymmetric key parser 'x509' registered Sep 10 04:48:25.740466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 10 04:48:25.740473 kernel: io scheduler mq-deadline registered Sep 10 04:48:25.740479 kernel: io scheduler kyber registered Sep 10 04:48:25.740486 kernel: io scheduler bfq registered Sep 10 04:48:25.740493 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 10 04:48:25.740500 kernel: ACPI: button: Power Button [PWRB] Sep 10 04:48:25.740507 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 10 04:48:25.740586 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 10 04:48:25.740597 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 04:48:25.740604 kernel: thunder_xcv, ver 1.0 Sep 10 04:48:25.740611 kernel: thunder_bgx, ver 1.0 Sep 10 04:48:25.740618 kernel: nicpf, ver 1.0 Sep 10 04:48:25.740625 kernel: nicvf, ver 1.0 Sep 10 04:48:25.740692 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 10 04:48:25.740746 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T04:48:25 UTC (1757479705) Sep 10 04:48:25.740756 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 10 04:48:25.740765 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 10 04:48:25.740772 kernel: watchdog: NMI not fully supported Sep 10 04:48:25.740779 kernel: watchdog: Hard watchdog permanently disabled Sep 10 04:48:25.740785 kernel: NET: Registered PF_INET6 protocol family Sep 10 04:48:25.740792 kernel: Segment Routing with IPv6 Sep 10 04:48:25.740799 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 04:48:25.740813 kernel: NET: Registered PF_PACKET protocol family Sep 10 04:48:25.740820 kernel: Key type dns_resolver registered Sep 10 04:48:25.740827 kernel: registered taskstats version 1 Sep 10 04:48:25.740834 kernel: Loading compiled-in X.509 certificates Sep 10 04:48:25.740843 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: ff30d9664cb85a7bf5140c28bc9be2659edf9859' Sep 10 04:48:25.740850 kernel: Demotion targets for Node 0: null Sep 10 04:48:25.740857 kernel: Key type .fscrypt registered Sep 10 04:48:25.740863 kernel: Key type fscrypt-provisioning registered Sep 10 04:48:25.740870 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 04:48:25.740877 kernel: ima: Allocated hash algorithm: sha1 Sep 10 04:48:25.740884 kernel: ima: No architecture policies found Sep 10 04:48:25.740891 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 10 04:48:25.740899 kernel: clk: Disabling unused clocks Sep 10 04:48:25.740906 kernel: PM: genpd: Disabling unused power domains Sep 10 04:48:25.740913 kernel: Warning: unable to open an initial console. Sep 10 04:48:25.740921 kernel: Freeing unused kernel memory: 38976K Sep 10 04:48:25.740928 kernel: Run /init as init process Sep 10 04:48:25.740934 kernel: with arguments: Sep 10 04:48:25.740941 kernel: /init Sep 10 04:48:25.740948 kernel: with environment: Sep 10 04:48:25.740955 kernel: HOME=/ Sep 10 04:48:25.740963 kernel: TERM=linux Sep 10 04:48:25.740970 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 04:48:25.740978 systemd[1]: Successfully made /usr/ read-only. 
Sep 10 04:48:25.740988 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 04:48:25.740996 systemd[1]: Detected virtualization kvm. Sep 10 04:48:25.741003 systemd[1]: Detected architecture arm64. Sep 10 04:48:25.741010 systemd[1]: Running in initrd. Sep 10 04:48:25.741017 systemd[1]: No hostname configured, using default hostname. Sep 10 04:48:25.741026 systemd[1]: Hostname set to . Sep 10 04:48:25.741033 systemd[1]: Initializing machine ID from VM UUID. Sep 10 04:48:25.741040 systemd[1]: Queued start job for default target initrd.target. Sep 10 04:48:25.741047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 04:48:25.741055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 04:48:25.741063 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 10 04:48:25.741071 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 04:48:25.741082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 10 04:48:25.741092 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 10 04:48:25.741100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 10 04:48:25.741108 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 10 04:48:25.741115 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 04:48:25.741123 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 04:48:25.741130 systemd[1]: Reached target paths.target - Path Units. Sep 10 04:48:25.741139 systemd[1]: Reached target slices.target - Slice Units. Sep 10 04:48:25.741146 systemd[1]: Reached target swap.target - Swaps. Sep 10 04:48:25.741154 systemd[1]: Reached target timers.target - Timer Units. Sep 10 04:48:25.741161 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 04:48:25.741168 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 04:48:25.741176 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 10 04:48:25.741183 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 10 04:48:25.741191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 04:48:25.741198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 04:48:25.741207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 04:48:25.741214 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 04:48:25.741222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 10 04:48:25.741230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 04:48:25.741237 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
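The firmware identity printed earlier ("DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022", SMBIOS 3.0 via EDK II) and systemd's "Detected virtualization kvm" above both come from tables the hypervisor provides. Purely as an illustration, a small sketch that reads the same identity strings back through the standard /sys/class/dmi/id sysfs attributes:

```python
# Sketch: print the SMBIOS/DMI identity strings behind the "DMI: QEMU KVM
# Virtual Machine, BIOS unknown 02/02/2022" line above. These sysfs attributes
# are standard, but not every one is populated on every machine.
from pathlib import Path

dmi = Path("/sys/class/dmi/id")
for attr in ("sys_vendor", "product_name", "bios_vendor", "bios_version", "bios_date"):
    node = dmi / attr
    value = node.read_text().strip() if node.exists() else "(not populated)"
    print(f"{attr:13} {value}")
```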
Sep 10 04:48:25.741245 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 10 04:48:25.741252 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 04:48:25.741260 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 04:48:25.741267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 04:48:25.741276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 04:48:25.741283 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 10 04:48:25.741291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 04:48:25.741299 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 04:48:25.741307 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 04:48:25.741330 systemd-journald[244]: Collecting audit messages is disabled. Sep 10 04:48:25.741347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 04:48:25.741355 systemd-journald[244]: Journal started Sep 10 04:48:25.741374 systemd-journald[244]: Runtime Journal (/run/log/journal/233ad5940d264175882ce971ebb03653) is 6M, max 48.5M, 42.4M free. Sep 10 04:48:25.731823 systemd-modules-load[246]: Inserted module 'overlay' Sep 10 04:48:25.744381 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 04:48:25.744867 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 10 04:48:25.746877 kernel: Bridge firewalling registered Sep 10 04:48:25.746893 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 04:48:25.747857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 04:48:25.748792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 04:48:25.753909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 10 04:48:25.755284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 04:48:25.756871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 04:48:25.771650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 04:48:25.777334 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 04:48:25.778526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 04:48:25.779282 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 10 04:48:25.781821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 04:48:25.784857 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 04:48:25.786491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 04:48:25.788299 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 10 04:48:25.809224 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc557398806956d5b7cf8f58d9bd1545e6d9edee390c62eb4b21701fba26a284 Sep 10 04:48:25.822371 systemd-resolved[287]: Positive Trust Anchors: Sep 10 04:48:25.822389 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 04:48:25.822419 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 04:48:25.827157 systemd-resolved[287]: Defaulting to hostname 'linux'. Sep 10 04:48:25.828021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 04:48:25.830275 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 04:48:25.873574 kernel: SCSI subsystem initialized Sep 10 04:48:25.878556 kernel: Loading iSCSI transport class v2.0-870. Sep 10 04:48:25.885587 kernel: iscsi: registered transport (tcp) Sep 10 04:48:25.897563 kernel: iscsi: registered transport (qla4xxx) Sep 10 04:48:25.897587 kernel: QLogic iSCSI HBA Driver Sep 10 04:48:25.912773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 04:48:25.926591 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 04:48:25.927899 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 04:48:25.973441 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 10 04:48:25.975196 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 10 04:48:26.036567 kernel: raid6: neonx8 gen() 15665 MB/s Sep 10 04:48:26.053560 kernel: raid6: neonx4 gen() 15761 MB/s Sep 10 04:48:26.070553 kernel: raid6: neonx2 gen() 13146 MB/s Sep 10 04:48:26.087550 kernel: raid6: neonx1 gen() 10375 MB/s Sep 10 04:48:26.104561 kernel: raid6: int64x8 gen() 6843 MB/s Sep 10 04:48:26.121564 kernel: raid6: int64x4 gen() 7316 MB/s Sep 10 04:48:26.138553 kernel: raid6: int64x2 gen() 6077 MB/s Sep 10 04:48:26.155560 kernel: raid6: int64x1 gen() 5031 MB/s Sep 10 04:48:26.155581 kernel: raid6: using algorithm neonx4 gen() 15761 MB/s Sep 10 04:48:26.172576 kernel: raid6: .... xor() 12219 MB/s, rmw enabled Sep 10 04:48:26.172604 kernel: raid6: using neon recovery algorithm Sep 10 04:48:26.177562 kernel: xor: measuring software checksum speed Sep 10 04:48:26.177591 kernel: 8regs : 21636 MB/sec Sep 10 04:48:26.179120 kernel: 32regs : 20202 MB/sec Sep 10 04:48:26.179132 kernel: arm64_neon : 28234 MB/sec Sep 10 04:48:26.179141 kernel: xor: using function: arm64_neon (28234 MB/sec) Sep 10 04:48:26.230579 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 04:48:26.236365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
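The same kernel command line appears twice above: once in the kernel banner ("Kernel command line: ...") and again as the parameters the dracut cmdline hook reports. The flags that shape this boot are the Flatcar /usr verity pair (verity.usr=PARTUUID=..., verity.usrhash=...), mount.usr/mount.usrflags, and root=LABEL=ROOT. As a sketch only, not anything the initrd actually runs, here is one way to pull those key=value tokens out of /proc/cmdline on the booted system:

```python
# Sketch: split /proc/cmdline into key=value pairs so flags like
# verity.usrhash=, mount.usr= and root= (seen in the log above) are easy to
# inspect. Bare words without "=" are kept with a value of True.
import shlex

def parse_cmdline(text: str) -> dict:
    args: dict = {}
    for token in shlex.split(text):
        key, sep, value = token.partition("=")
        args.setdefault(key, []).append(value if sep else True)
    return args

with open("/proc/cmdline") as f:
    args = parse_cmdline(f.read())

for key in ("root", "mount.usr", "verity.usr", "verity.usrhash", "console"):
    print(key, "=", args.get(key))
```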
Sep 10 04:48:26.238616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 04:48:26.266114 systemd-udevd[500]: Using default interface naming scheme 'v255'. Sep 10 04:48:26.270165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 04:48:26.271818 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 04:48:26.302522 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Sep 10 04:48:26.322779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 04:48:26.326666 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 04:48:26.378153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 04:48:26.380278 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 04:48:26.432564 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 10 04:48:26.433578 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 04:48:26.435957 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 04:48:26.435990 kernel: GPT:9289727 != 19775487 Sep 10 04:48:26.436000 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 04:48:26.436009 kernel: GPT:9289727 != 19775487 Sep 10 04:48:26.436523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 04:48:26.438274 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 04:48:26.438290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 04:48:26.436661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 04:48:26.441650 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 04:48:26.446058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 04:48:26.468608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 10 04:48:26.469695 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 04:48:26.471306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 04:48:26.478692 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 10 04:48:26.490565 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 10 04:48:26.491462 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 10 04:48:26.499310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 04:48:26.500387 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 04:48:26.502093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 04:48:26.503637 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 04:48:26.505914 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 04:48:26.507361 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 04:48:26.525576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 04:48:26.525860 disk-uuid[591]: Primary Header is updated. Sep 10 04:48:26.525860 disk-uuid[591]: Secondary Entries is updated. 
Sep 10 04:48:26.525860 disk-uuid[591]: Secondary Header is updated. Sep 10 04:48:26.526334 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 04:48:27.538574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 04:48:27.539956 disk-uuid[598]: The operation has completed successfully. Sep 10 04:48:27.565194 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 04:48:27.565310 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 04:48:27.588409 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 04:48:27.614434 sh[612]: Success Sep 10 04:48:27.627002 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 04:48:27.627065 kernel: device-mapper: uevent: version 1.0.3 Sep 10 04:48:27.627086 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 04:48:27.633581 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 10 04:48:27.656890 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 04:48:27.658447 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 10 04:48:27.664304 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 04:48:27.670284 kernel: BTRFS: device fsid e05b724d-13f9-4a54-9e36-10f4d8a13534 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624) Sep 10 04:48:27.670312 kernel: BTRFS info (device dm-0): first mount of filesystem e05b724d-13f9-4a54-9e36-10f4d8a13534 Sep 10 04:48:27.670322 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 10 04:48:27.674748 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 04:48:27.674767 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 04:48:27.675700 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 04:48:27.676659 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 04:48:27.677776 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 04:48:27.678440 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 04:48:27.681075 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 04:48:27.705206 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654) Sep 10 04:48:27.705243 kernel: BTRFS info (device vda6): first mount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a Sep 10 04:48:27.705253 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 04:48:27.707827 kernel: BTRFS info (device vda6): turning on async discard Sep 10 04:48:27.707856 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 04:48:27.712550 kernel: BTRFS info (device vda6): last unmount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a Sep 10 04:48:27.712699 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 04:48:27.714674 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 04:48:27.780127 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 04:48:27.784159 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
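The virtio-blk probe above complains that the primary GPT's record of where the backup header lives (LBA 9289727) does not match the real end of the resized disk (LBA 19775487); the disk-uuid.service run logged right afterwards rewrites the primary header, secondary entries and secondary header. A minimal sketch of that consistency check, assuming 512-byte sectors and an illustrative device path (run it against any disk image or block device you can read):

```python
# Sketch: reproduce the check behind "GPT: Primary header thinks Alt. header
# is not at the end of the disk" by comparing the backup-header LBA stored in
# the primary GPT header (little-endian u64 at offset 32 of LBA 1) with the
# actual last LBA of the device. Sector size and device path are assumptions.
import struct
import sys

SECTOR = 512
dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/vda"   # illustrative path

with open(dev, "rb") as f:
    f.seek(SECTOR)                     # primary GPT header sits at LBA 1
    header = f.read(92)
    f.seek(0, 2)                       # end of device/image
    last_lba = f.tell() // SECTOR - 1

signature = header[:8]
(alternate_lba,) = struct.unpack_from("<Q", header, 32)

print("signature      :", signature)
print("alternate LBA  :", alternate_lba)
print("actual last LBA:", last_lba)
if signature == b"EFI PART" and alternate_lba != last_lba:
    print("backup GPT header is not at the end of the disk (same condition the kernel reports)")
```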
Sep 10 04:48:27.818377 ignition[696]: Ignition 2.22.0 Sep 10 04:48:27.818393 ignition[696]: Stage: fetch-offline Sep 10 04:48:27.818429 ignition[696]: no configs at "/usr/lib/ignition/base.d" Sep 10 04:48:27.818437 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 04:48:27.818509 ignition[696]: parsed url from cmdline: "" Sep 10 04:48:27.818512 ignition[696]: no config URL provided Sep 10 04:48:27.818516 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 04:48:27.818522 ignition[696]: no config at "/usr/lib/ignition/user.ign" Sep 10 04:48:27.818552 ignition[696]: op(1): [started] loading QEMU firmware config module Sep 10 04:48:27.818556 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 04:48:27.824700 ignition[696]: op(1): [finished] loading QEMU firmware config module Sep 10 04:48:27.826385 systemd-networkd[804]: lo: Link UP Sep 10 04:48:27.826400 systemd-networkd[804]: lo: Gained carrier Sep 10 04:48:27.827093 systemd-networkd[804]: Enumeration completed Sep 10 04:48:27.827190 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 04:48:27.827462 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 04:48:27.827466 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 04:48:27.828163 systemd-networkd[804]: eth0: Link UP Sep 10 04:48:27.828246 systemd-networkd[804]: eth0: Gained carrier Sep 10 04:48:27.828254 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 04:48:27.828862 systemd[1]: Reached target network.target - Network. Sep 10 04:48:27.846586 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 04:48:27.875046 ignition[696]: parsing config with SHA512: 2fecf908101077cdd4be43153b6a04ec497a9be944c598bb2ad0b6bc6bdee74dd693c5f205c070b1b0c1ee5613209f90d6babb7d6a4db894ae538673c410fb65 Sep 10 04:48:27.880259 unknown[696]: fetched base config from "system" Sep 10 04:48:27.880270 unknown[696]: fetched user config from "qemu" Sep 10 04:48:27.880641 ignition[696]: fetch-offline: fetch-offline passed Sep 10 04:48:27.880693 ignition[696]: Ignition finished successfully Sep 10 04:48:27.882441 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 04:48:27.883784 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 04:48:27.884480 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 04:48:27.919111 ignition[814]: Ignition 2.22.0 Sep 10 04:48:27.919128 ignition[814]: Stage: kargs Sep 10 04:48:27.919260 ignition[814]: no configs at "/usr/lib/ignition/base.d" Sep 10 04:48:27.919268 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 04:48:27.920063 ignition[814]: kargs: kargs passed Sep 10 04:48:27.920111 ignition[814]: Ignition finished successfully Sep 10 04:48:27.922623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 04:48:27.924316 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
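Ignition's fetch-offline stage above probes the QEMU firmware config channel (modprobe qemu_fw_cfg), logs the SHA-512 of the config it parsed, and notes that it merged the base config from "system" with the user config from "qemu". A small sketch of the digest step only, assuming the config has been saved locally as user.ign (an illustrative path) and that the hash is taken over the raw bytes of the file:

```python
# Sketch: compute the kind of SHA-512 digest Ignition logs as
# "parsing config with SHA512: ..." and do a basic sanity check that the
# payload is JSON with an ignition.version field. The path is illustrative.
import hashlib
import json

path = "user.ign"          # assumed local copy of the Ignition config
with open(path, "rb") as f:
    raw = f.read()

print("SHA512:", hashlib.sha512(raw).hexdigest())

config = json.loads(raw)
print("ignition version:", config.get("ignition", {}).get("version"))
```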
Sep 10 04:48:27.953590 ignition[822]: Ignition 2.22.0 Sep 10 04:48:27.953604 ignition[822]: Stage: disks Sep 10 04:48:27.953725 ignition[822]: no configs at "/usr/lib/ignition/base.d" Sep 10 04:48:27.953734 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 04:48:27.954479 ignition[822]: disks: disks passed Sep 10 04:48:27.954521 ignition[822]: Ignition finished successfully Sep 10 04:48:27.958148 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 04:48:27.963662 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 04:48:27.964769 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 04:48:27.967852 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 04:48:27.968734 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 04:48:27.972444 systemd[1]: Reached target basic.target - Basic System. Sep 10 04:48:27.974898 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 04:48:28.005358 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 04:48:28.009657 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 04:48:28.011670 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 04:48:28.084866 kernel: EXT4-fs (vda9): mounted filesystem dd8d03e8-6691-4f3c-8fb2-6f7ae674fb2f r/w with ordered data mode. Quota mode: none. Sep 10 04:48:28.085788 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 04:48:28.087419 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 04:48:28.090169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 04:48:28.092219 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 04:48:28.093078 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 04:48:28.093116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 04:48:28.093136 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 04:48:28.102984 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 04:48:28.105041 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 04:48:28.110573 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Sep 10 04:48:28.112714 kernel: BTRFS info (device vda6): first mount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a Sep 10 04:48:28.112767 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 04:48:28.115584 kernel: BTRFS info (device vda6): turning on async discard Sep 10 04:48:28.115610 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 04:48:28.116812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 04:48:28.143136 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 04:48:28.147632 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Sep 10 04:48:28.152139 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 04:48:28.159808 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 04:48:28.231416 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 10 04:48:28.233451 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 04:48:28.234911 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 04:48:28.254583 kernel: BTRFS info (device vda6): last unmount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a Sep 10 04:48:28.268974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 04:48:28.282871 ignition[952]: INFO : Ignition 2.22.0 Sep 10 04:48:28.282871 ignition[952]: INFO : Stage: mount Sep 10 04:48:28.284260 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 04:48:28.284260 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 04:48:28.284260 ignition[952]: INFO : mount: mount passed Sep 10 04:48:28.284260 ignition[952]: INFO : Ignition finished successfully Sep 10 04:48:28.285878 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 04:48:28.287628 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 04:48:28.801988 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 04:48:28.803481 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 04:48:28.822556 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966) Sep 10 04:48:28.824556 kernel: BTRFS info (device vda6): first mount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a Sep 10 04:48:28.824589 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 04:48:28.826616 kernel: BTRFS info (device vda6): turning on async discard Sep 10 04:48:28.826631 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 04:48:28.827892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 04:48:28.857102 ignition[983]: INFO : Ignition 2.22.0 Sep 10 04:48:28.857102 ignition[983]: INFO : Stage: files Sep 10 04:48:28.858606 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 04:48:28.858606 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 04:48:28.858606 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Sep 10 04:48:28.861627 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 04:48:28.861627 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 04:48:28.861627 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 04:48:28.861627 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 04:48:28.861627 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 04:48:28.860685 unknown[983]: wrote ssh authorized keys file for user: core Sep 10 04:48:28.867871 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 10 04:48:28.867871 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 10 04:48:28.894222 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 04:48:29.074604 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 04:48:29.085375 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 10 04:48:29.173688 systemd-networkd[804]: eth0: Gained IPv6LL Sep 10 04:48:29.429756 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 04:48:29.840244 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 04:48:29.840244 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 04:48:29.843887 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 04:48:29.843887 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 04:48:29.843887 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 04:48:29.843887 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 10 04:48:29.843887 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 04:48:29.852142 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 04:48:29.852142 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 10 04:48:29.852142 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 04:48:29.858024 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 04:48:29.861229 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 04:48:29.863982 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 04:48:29.863982 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 10 04:48:29.863982 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 04:48:29.863982 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 04:48:29.863982 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 04:48:29.863982 ignition[983]: INFO : files: files passed Sep 10 04:48:29.863982 ignition[983]: INFO : Ignition finished successfully Sep 10 04:48:29.864694 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 04:48:29.867043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 04:48:29.871773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 04:48:29.885316 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 04:48:29.886558 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 04:48:29.888537 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 04:48:29.888537 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 04:48:29.886577 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 04:48:29.891911 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 04:48:29.891744 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 04:48:29.893020 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 04:48:29.895228 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 04:48:29.939397 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 04:48:29.939496 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 04:48:29.941289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 04:48:29.942793 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 04:48:29.944335 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 04:48:29.945006 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 04:48:29.968175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 10 04:48:29.970200 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 04:48:29.994694 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 04:48:29.996432 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 04:48:29.998522 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 04:48:29.999280 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 04:48:29.999383 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 04:48:30.001275 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 04:48:30.002939 systemd[1]: Stopped target basic.target - Basic System. Sep 10 04:48:30.004228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 04:48:30.005514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 04:48:30.007066 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 04:48:30.008495 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 04:48:30.010136 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 04:48:30.011493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 04:48:30.013065 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 04:48:30.014535 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 04:48:30.015983 systemd[1]: Stopped target swap.target - Swaps. Sep 10 04:48:30.017157 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 04:48:30.017261 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 04:48:30.019119 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 04:48:30.020528 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 04:48:30.022103 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 04:48:30.023553 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 04:48:30.025450 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 04:48:30.025575 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 04:48:30.027736 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 04:48:30.027887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 04:48:30.029444 systemd[1]: Stopped target paths.target - Path Units. Sep 10 04:48:30.030804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 04:48:30.035605 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 04:48:30.037520 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 04:48:30.038236 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 04:48:30.039535 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 04:48:30.039623 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 04:48:30.040844 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 04:48:30.040915 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 04:48:30.042285 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 10 04:48:30.042394 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 04:48:30.043649 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 04:48:30.043738 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 04:48:30.045632 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 04:48:30.047047 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 04:48:30.047160 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 04:48:30.063874 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 04:48:30.064655 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 04:48:30.064783 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 04:48:30.066279 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 04:48:30.066368 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 04:48:30.071218 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 04:48:30.071303 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 04:48:30.078936 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 04:48:30.079693 ignition[1039]: INFO : Ignition 2.22.0 Sep 10 04:48:30.079693 ignition[1039]: INFO : Stage: umount Sep 10 04:48:30.079693 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 04:48:30.079693 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 04:48:30.084467 ignition[1039]: INFO : umount: umount passed Sep 10 04:48:30.084467 ignition[1039]: INFO : Ignition finished successfully Sep 10 04:48:30.083099 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 04:48:30.084605 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 04:48:30.086053 systemd[1]: Stopped target network.target - Network. Sep 10 04:48:30.087243 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 04:48:30.087298 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 04:48:30.088729 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 04:48:30.088770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 04:48:30.090039 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 04:48:30.090081 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 04:48:30.091301 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 04:48:30.091336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 04:48:30.092928 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 04:48:30.095664 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 04:48:30.104670 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 04:48:30.105433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 04:48:30.108723 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 04:48:30.108946 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 04:48:30.108981 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 04:48:30.111759 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 10 04:48:30.116120 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 04:48:30.116248 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 04:48:30.119018 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 10 04:48:30.119178 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 04:48:30.120686 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 04:48:30.120715 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 04:48:30.122818 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 04:48:30.124291 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 04:48:30.124339 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 04:48:30.125878 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 04:48:30.125913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 04:48:30.128269 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 04:48:30.128308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 04:48:30.129752 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 04:48:30.134061 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 04:48:30.143973 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 04:48:30.144836 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 04:48:30.145963 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 04:48:30.146043 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 04:48:30.147759 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 04:48:30.147832 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 04:48:30.148741 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 04:48:30.148768 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 04:48:30.150093 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 04:48:30.150132 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 04:48:30.152294 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 04:48:30.152338 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 04:48:30.154327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 04:48:30.154370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 04:48:30.157153 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 04:48:30.158023 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 04:48:30.158088 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 04:48:30.160298 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 04:48:30.160334 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 04:48:30.162997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 04:48:30.163038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 10 04:48:30.172994 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 04:48:30.173099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 04:48:30.193769 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 04:48:30.194575 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 04:48:30.195479 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 04:48:30.196341 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 04:48:30.196397 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 04:48:30.198492 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 04:48:30.217996 systemd[1]: Switching root. Sep 10 04:48:30.245338 systemd-journald[244]: Journal stopped Sep 10 04:48:30.946479 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 10 04:48:30.946528 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 04:48:30.946572 kernel: SELinux: policy capability open_perms=1 Sep 10 04:48:30.946583 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 04:48:30.946592 kernel: SELinux: policy capability always_check_network=0 Sep 10 04:48:30.946601 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 04:48:30.946611 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 04:48:30.946621 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 04:48:30.946630 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 04:48:30.946639 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 04:48:30.946648 kernel: audit: type=1403 audit(1757479710.410:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 04:48:30.946661 systemd[1]: Successfully loaded SELinux policy in 52.705ms. Sep 10 04:48:30.946677 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.157ms. Sep 10 04:48:30.946688 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 04:48:30.946698 systemd[1]: Detected virtualization kvm. Sep 10 04:48:30.946709 systemd[1]: Detected architecture arm64. Sep 10 04:48:30.946719 systemd[1]: Detected first boot. Sep 10 04:48:30.946729 systemd[1]: Initializing machine ID from VM UUID. Sep 10 04:48:30.946740 zram_generator::config[1086]: No configuration found. Sep 10 04:48:30.946750 kernel: NET: Registered PF_VSOCK protocol family Sep 10 04:48:30.946759 systemd[1]: Populated /etc with preset unit settings. Sep 10 04:48:30.946770 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 04:48:30.946780 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 04:48:30.946800 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 04:48:30.946813 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 04:48:30.946823 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 04:48:30.946833 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 04:48:30.946843 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Sep 10 04:48:30.946853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 04:48:30.946863 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 04:48:30.946873 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 04:48:30.946883 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 04:48:30.946892 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 04:48:30.946905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 04:48:30.946919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 04:48:30.946929 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 04:48:30.946939 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 04:48:30.946949 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 04:48:30.946958 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 04:48:30.946968 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 10 04:48:30.946978 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 04:48:30.946990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 04:48:30.947000 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 04:48:30.947009 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 04:48:30.947023 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 04:48:30.947033 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 04:48:30.947042 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 04:48:30.947052 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 04:48:30.947063 systemd[1]: Reached target slices.target - Slice Units. Sep 10 04:48:30.947073 systemd[1]: Reached target swap.target - Swaps. Sep 10 04:48:30.947084 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 04:48:30.947094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 04:48:30.947104 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 04:48:30.947113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 04:48:30.947123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 04:48:30.947133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 04:48:30.947143 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 04:48:30.947154 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 04:48:30.947164 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 04:48:30.947174 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 04:48:30.947184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 04:48:30.947194 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 04:48:30.947203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 10 04:48:30.947214 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 04:48:30.947223 systemd[1]: Reached target machines.target - Containers. Sep 10 04:48:30.947233 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 04:48:30.947243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:48:30.947254 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 04:48:30.947264 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 04:48:30.947275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:48:30.947284 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 04:48:30.947294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 04:48:30.947304 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 04:48:30.947314 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:48:30.947324 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 04:48:30.947334 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 04:48:30.947345 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 04:48:30.947354 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 04:48:30.947364 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 04:48:30.947376 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 04:48:30.947386 kernel: fuse: init (API version 7.41) Sep 10 04:48:30.947395 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 04:48:30.947405 kernel: loop: module loaded Sep 10 04:48:30.947413 kernel: ACPI: bus type drm_connector registered Sep 10 04:48:30.947424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 04:48:30.947434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 04:48:30.947444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 04:48:30.947454 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 04:48:30.947464 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 04:48:30.947475 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 04:48:30.947485 systemd[1]: Stopped verity-setup.service. Sep 10 04:48:30.947494 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 04:48:30.947504 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 04:48:30.947513 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 04:48:30.947523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 04:48:30.947533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 04:48:30.947573 systemd-journald[1154]: Collecting audit messages is disabled. 
Sep 10 04:48:30.947597 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 04:48:30.947607 systemd-journald[1154]: Journal started Sep 10 04:48:30.947626 systemd-journald[1154]: Runtime Journal (/run/log/journal/233ad5940d264175882ce971ebb03653) is 6M, max 48.5M, 42.4M free. Sep 10 04:48:30.749815 systemd[1]: Queued start job for default target multi-user.target. Sep 10 04:48:30.775441 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 04:48:30.775802 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 04:48:30.949243 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 04:48:30.949989 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 04:48:30.951174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 04:48:30.952374 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 04:48:30.952527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 04:48:30.953664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:48:30.953824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:48:30.954880 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 04:48:30.955025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 04:48:30.956087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 04:48:30.956251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 04:48:30.957522 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 04:48:30.957687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 04:48:30.958989 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:48:30.959133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 04:48:30.960260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 04:48:30.961511 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 04:48:30.962728 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 04:48:30.963911 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 04:48:30.974669 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 04:48:30.976904 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 04:48:30.978730 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 04:48:30.979597 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 04:48:30.979625 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 04:48:30.981197 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 04:48:30.988289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 04:48:30.989295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:48:30.990682 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 04:48:30.992569 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 10 04:48:30.993609 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 04:48:30.995671 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 04:48:30.996831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 04:48:30.997650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 04:48:31.004119 systemd-journald[1154]: Time spent on flushing to /var/log/journal/233ad5940d264175882ce971ebb03653 is 17.505ms for 883 entries. Sep 10 04:48:31.004119 systemd-journald[1154]: System Journal (/var/log/journal/233ad5940d264175882ce971ebb03653) is 8M, max 195.6M, 187.6M free. Sep 10 04:48:31.028973 systemd-journald[1154]: Received client request to flush runtime journal. Sep 10 04:48:31.029015 kernel: loop0: detected capacity change from 0 to 203944 Sep 10 04:48:31.000726 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 04:48:31.003340 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 04:48:31.012679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 04:48:31.013908 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 04:48:31.014903 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 04:48:31.025907 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 04:48:31.027258 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 04:48:31.029503 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 04:48:31.031757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 04:48:31.034924 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 04:48:31.037569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 04:48:31.043576 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 04:48:31.046233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 04:48:31.055572 kernel: loop1: detected capacity change from 0 to 100632 Sep 10 04:48:31.055820 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 04:48:31.067825 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Sep 10 04:48:31.068076 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Sep 10 04:48:31.071131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 04:48:31.077570 kernel: loop2: detected capacity change from 0 to 119368 Sep 10 04:48:31.095577 kernel: loop3: detected capacity change from 0 to 203944 Sep 10 04:48:31.105566 kernel: loop4: detected capacity change from 0 to 100632 Sep 10 04:48:31.110557 kernel: loop5: detected capacity change from 0 to 119368 Sep 10 04:48:31.113706 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 04:48:31.114073 (sd-merge)[1226]: Merged extensions into '/usr'. Sep 10 04:48:31.118121 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 04:48:31.118137 systemd[1]: Reloading... 
Sep 10 04:48:31.173627 zram_generator::config[1249]: No configuration found. Sep 10 04:48:31.247517 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 04:48:31.311779 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 04:48:31.311960 systemd[1]: Reloading finished in 193 ms. Sep 10 04:48:31.340887 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 04:48:31.342144 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 04:48:31.356702 systemd[1]: Starting ensure-sysext.service... Sep 10 04:48:31.358206 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 04:48:31.366633 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Sep 10 04:48:31.366646 systemd[1]: Reloading... Sep 10 04:48:31.372856 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 04:48:31.372880 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 04:48:31.373095 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 04:48:31.373272 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 04:48:31.374252 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 04:48:31.374577 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 10 04:48:31.374714 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 10 04:48:31.381356 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 04:48:31.381461 systemd-tmpfiles[1287]: Skipping /boot Sep 10 04:48:31.388413 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 04:48:31.388430 systemd-tmpfiles[1287]: Skipping /boot Sep 10 04:48:31.411562 zram_generator::config[1314]: No configuration found. Sep 10 04:48:31.542208 systemd[1]: Reloading finished in 175 ms. Sep 10 04:48:31.551625 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 04:48:31.556653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 04:48:31.563591 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 04:48:31.565742 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 04:48:31.567630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 04:48:31.572673 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 04:48:31.574705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 04:48:31.576747 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 04:48:31.583196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:48:31.588242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:48:31.590881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 10 04:48:31.593862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:48:31.594913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:48:31.595025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 04:48:31.595950 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 04:48:31.600206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:48:31.600341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:48:31.601855 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:48:31.601983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 04:48:31.603148 augenrules[1378]: No rules Sep 10 04:48:31.603878 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 04:48:31.607735 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 04:48:31.609214 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 04:48:31.610591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 04:48:31.610717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 04:48:31.620751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:48:31.621775 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Sep 10 04:48:31.622109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:48:31.623977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 04:48:31.633742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:48:31.634665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:48:31.634769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 04:48:31.635757 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 04:48:31.638620 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 04:48:31.639634 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 04:48:31.640914 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 04:48:31.643551 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 04:48:31.644911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:48:31.646568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:48:31.648083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 04:48:31.648486 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 10 04:48:31.650187 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:48:31.651582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 04:48:31.653348 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 04:48:31.667454 systemd[1]: Finished ensure-sysext.service. Sep 10 04:48:31.676499 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 04:48:31.677407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:48:31.679738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:48:31.681537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 04:48:31.687123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 04:48:31.692143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:48:31.693282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:48:31.693323 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 04:48:31.695671 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 04:48:31.697998 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 04:48:31.699615 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 04:48:31.700047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:48:31.703026 augenrules[1427]: /sbin/augenrules: No change Sep 10 04:48:31.708762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:48:31.710823 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 04:48:31.713562 augenrules[1450]: No rules Sep 10 04:48:31.712572 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 04:48:31.714940 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 04:48:31.715095 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 04:48:31.716171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 04:48:31.716309 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 04:48:31.719890 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:48:31.720027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 04:48:31.727296 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 04:48:31.735562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 04:48:31.735634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 04:48:31.742869 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 04:48:31.770102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 10 04:48:31.772919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 04:48:31.791910 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 04:48:31.825297 systemd-networkd[1438]: lo: Link UP Sep 10 04:48:31.825305 systemd-networkd[1438]: lo: Gained carrier Sep 10 04:48:31.826102 systemd-networkd[1438]: Enumeration completed Sep 10 04:48:31.826273 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 04:48:31.826502 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 04:48:31.826512 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 04:48:31.826951 systemd-networkd[1438]: eth0: Link UP Sep 10 04:48:31.827188 systemd-networkd[1438]: eth0: Gained carrier Sep 10 04:48:31.827203 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 04:48:31.828914 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 04:48:31.831403 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 04:48:31.832481 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 04:48:31.833926 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 04:48:31.834003 systemd-resolved[1353]: Positive Trust Anchors: Sep 10 04:48:31.834018 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 04:48:31.834050 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 04:48:31.840268 systemd-resolved[1353]: Defaulting to hostname 'linux'. Sep 10 04:48:31.840596 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 04:48:31.841184 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection. Sep 10 04:48:31.842017 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 04:48:31.842064 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-09-10 04:48:31.817545 UTC. Sep 10 04:48:31.842473 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 04:48:31.843572 systemd[1]: Reached target network.target - Network. Sep 10 04:48:31.844658 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 04:48:31.845938 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 04:48:31.847164 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 04:48:31.848458 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 10 04:48:31.849747 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 04:48:31.850809 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 04:48:31.851825 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 04:48:31.853929 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 04:48:31.853962 systemd[1]: Reached target paths.target - Path Units. Sep 10 04:48:31.854750 systemd[1]: Reached target timers.target - Timer Units. Sep 10 04:48:31.856174 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 04:48:31.858160 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 04:48:31.860634 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 04:48:31.862860 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 04:48:31.863761 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 04:48:31.884806 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 04:48:31.885842 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 04:48:31.889383 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 04:48:31.890717 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 04:48:31.899443 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 04:48:31.900375 systemd[1]: Reached target basic.target - Basic System. Sep 10 04:48:31.901242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 04:48:31.901270 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 04:48:31.902143 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 04:48:31.903907 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 04:48:31.905532 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 04:48:31.916320 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 04:48:31.918128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 04:48:31.919116 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 04:48:31.921671 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 04:48:31.923068 jq[1502]: false Sep 10 04:48:31.923423 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 04:48:31.925390 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 04:48:31.929930 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 04:48:31.930720 extend-filesystems[1503]: Found /dev/vda6 Sep 10 04:48:31.934410 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 04:48:31.935653 extend-filesystems[1503]: Found /dev/vda9 Sep 10 04:48:31.937486 extend-filesystems[1503]: Checking size of /dev/vda9 Sep 10 04:48:31.939817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 04:48:31.941449 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 04:48:31.941885 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 04:48:31.942309 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 04:48:31.944455 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 04:48:31.950171 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 04:48:31.951503 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 04:48:31.953056 extend-filesystems[1503]: Resized partition /dev/vda9 Sep 10 04:48:31.955794 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 04:48:31.956092 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 04:48:31.956342 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 04:48:31.956503 extend-filesystems[1530]: resize2fs 1.47.3 (8-Jul-2025) Sep 10 04:48:31.963016 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 04:48:31.958810 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 04:48:31.963094 jq[1524]: true Sep 10 04:48:31.960578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 04:48:31.979767 jq[1534]: true Sep 10 04:48:31.980970 update_engine[1522]: I20250910 04:48:31.980696 1522 main.cc:92] Flatcar Update Engine starting Sep 10 04:48:31.994761 dbus-daemon[1500]: [system] SELinux support is enabled Sep 10 04:48:31.995975 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 04:48:31.999763 update_engine[1522]: I20250910 04:48:31.998688 1522 update_check_scheduler.cc:74] Next update check in 7m57s Sep 10 04:48:32.000106 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 04:48:32.000675 tar[1533]: linux-arm64/helm Sep 10 04:48:32.000131 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 04:48:32.001551 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 04:48:32.001572 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 04:48:32.003567 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 04:48:32.004098 systemd[1]: Started update-engine.service - Update Engine. Sep 10 04:48:32.008151 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 04:48:32.019903 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 04:48:32.021914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 10 04:48:32.025667 extend-filesystems[1530]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 04:48:32.025667 extend-filesystems[1530]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 04:48:32.025667 extend-filesystems[1530]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 04:48:32.029402 extend-filesystems[1503]: Resized filesystem in /dev/vda9 Sep 10 04:48:32.026177 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 04:48:32.026626 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 04:48:32.031592 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 04:48:32.032194 systemd-logind[1516]: New seat seat0. Sep 10 04:48:32.036848 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 04:48:32.050493 bash[1568]: Updated "/home/core/.ssh/authorized_keys" Sep 10 04:48:32.055839 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 04:48:32.057483 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 04:48:32.080365 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 04:48:32.160475 containerd[1545]: time="2025-09-10T04:48:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 04:48:32.161879 containerd[1545]: time="2025-09-10T04:48:32.161833800Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 04:48:32.177018 containerd[1545]: time="2025-09-10T04:48:32.176944672Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.185µs" Sep 10 04:48:32.177018 containerd[1545]: time="2025-09-10T04:48:32.176978500Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 04:48:32.177018 containerd[1545]: time="2025-09-10T04:48:32.176996313Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 04:48:32.177153 containerd[1545]: time="2025-09-10T04:48:32.177138496Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 04:48:32.177172 containerd[1545]: time="2025-09-10T04:48:32.177159384Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 04:48:32.177209 containerd[1545]: time="2025-09-10T04:48:32.177183227Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 04:48:32.177254 containerd[1545]: time="2025-09-10T04:48:32.177229437Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 04:48:32.177254 containerd[1545]: time="2025-09-10T04:48:32.177247329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 04:48:32.177496 containerd[1545]: time="2025-09-10T04:48:32.177456969Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Sep 10 04:48:32.177496 containerd[1545]: time="2025-09-10T04:48:32.177480733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 04:48:32.177496 containerd[1545]: time="2025-09-10T04:48:32.177492475Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 04:48:32.177585 containerd[1545]: time="2025-09-10T04:48:32.177500263Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 04:48:32.182696 containerd[1545]: time="2025-09-10T04:48:32.182638811Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 04:48:32.183167 containerd[1545]: time="2025-09-10T04:48:32.182869299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 04:48:32.183167 containerd[1545]: time="2025-09-10T04:48:32.182906323Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 04:48:32.183167 containerd[1545]: time="2025-09-10T04:48:32.182917426Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 04:48:32.183167 containerd[1545]: time="2025-09-10T04:48:32.182951094Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 04:48:32.183260 containerd[1545]: time="2025-09-10T04:48:32.183189410Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 04:48:32.183313 containerd[1545]: time="2025-09-10T04:48:32.183286941Z" level=info msg="metadata content store policy set" policy=shared Sep 10 04:48:32.187518 containerd[1545]: time="2025-09-10T04:48:32.187389113Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 04:48:32.187518 containerd[1545]: time="2025-09-10T04:48:32.187507892Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 04:48:32.187675 containerd[1545]: time="2025-09-10T04:48:32.187528500Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 04:48:32.187675 containerd[1545]: time="2025-09-10T04:48:32.187607859Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 04:48:32.187675 containerd[1545]: time="2025-09-10T04:48:32.187627908Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 04:48:32.187675 containerd[1545]: time="2025-09-10T04:48:32.187638971Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 04:48:32.187770 containerd[1545]: time="2025-09-10T04:48:32.187651472Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 04:48:32.187770 containerd[1545]: time="2025-09-10T04:48:32.187715535Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 04:48:32.187770 containerd[1545]: time="2025-09-10T04:48:32.187730552Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 04:48:32.187770 containerd[1545]: time="2025-09-10T04:48:32.187741615Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 04:48:32.187770 containerd[1545]: time="2025-09-10T04:48:32.187750641Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 04:48:32.187770 containerd[1545]: time="2025-09-10T04:48:32.187762063Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 04:48:32.188026 containerd[1545]: time="2025-09-10T04:48:32.187984364Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 04:48:32.188084 containerd[1545]: time="2025-09-10T04:48:32.188066398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 04:48:32.188107 containerd[1545]: time="2025-09-10T04:48:32.188092319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 04:48:32.188124 containerd[1545]: time="2025-09-10T04:48:32.188107935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 04:48:32.188176 containerd[1545]: time="2025-09-10T04:48:32.188159776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 04:48:32.188194 containerd[1545]: time="2025-09-10T04:48:32.188180424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 04:48:32.188221 containerd[1545]: time="2025-09-10T04:48:32.188194403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 04:48:32.188221 containerd[1545]: time="2025-09-10T04:48:32.188204907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 04:48:32.188221 containerd[1545]: time="2025-09-10T04:48:32.188217168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 04:48:32.188269 containerd[1545]: time="2025-09-10T04:48:32.188227872Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 04:48:32.188299 containerd[1545]: time="2025-09-10T04:48:32.188238735Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 04:48:32.188725 containerd[1545]: time="2025-09-10T04:48:32.188577817Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 04:48:32.188754 containerd[1545]: time="2025-09-10T04:48:32.188734697Z" level=info msg="Start snapshots syncer" Sep 10 04:48:32.188977 containerd[1545]: time="2025-09-10T04:48:32.188781546Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 04:48:32.189175 containerd[1545]: time="2025-09-10T04:48:32.189133049Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 04:48:32.189314 containerd[1545]: time="2025-09-10T04:48:32.189288651Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 04:48:32.189420 containerd[1545]: time="2025-09-10T04:48:32.189397684Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 04:48:32.189570 containerd[1545]: time="2025-09-10T04:48:32.189523572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 04:48:32.189601 containerd[1545]: time="2025-09-10T04:48:32.189580725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 04:48:32.189619 containerd[1545]: time="2025-09-10T04:48:32.189598657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 04:48:32.189636 containerd[1545]: time="2025-09-10T04:48:32.189616710Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 04:48:32.189654 containerd[1545]: time="2025-09-10T04:48:32.189634602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 04:48:32.189654 containerd[1545]: time="2025-09-10T04:48:32.189648741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 04:48:32.189705 containerd[1545]: time="2025-09-10T04:48:32.189664197Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 04:48:32.189705 containerd[1545]: time="2025-09-10T04:48:32.189694870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 04:48:32.189748 containerd[1545]: 
time="2025-09-10T04:48:32.189711165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 04:48:32.189748 containerd[1545]: time="2025-09-10T04:48:32.189727101Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 04:48:32.189929 containerd[1545]: time="2025-09-10T04:48:32.189898679Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 04:48:32.189999 containerd[1545]: time="2025-09-10T04:48:32.189974882Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 04:48:32.190027 containerd[1545]: time="2025-09-10T04:48:32.189995810Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 04:48:32.190027 containerd[1545]: time="2025-09-10T04:48:32.190013543Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 04:48:32.190064 containerd[1545]: time="2025-09-10T04:48:32.190025285Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 04:48:32.190064 containerd[1545]: time="2025-09-10T04:48:32.190039224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 04:48:32.190064 containerd[1545]: time="2025-09-10T04:48:32.190051805Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 04:48:32.190192 containerd[1545]: time="2025-09-10T04:48:32.190171302Z" level=info msg="runtime interface created" Sep 10 04:48:32.190243 containerd[1545]: time="2025-09-10T04:48:32.190185401Z" level=info msg="created NRI interface" Sep 10 04:48:32.190269 containerd[1545]: time="2025-09-10T04:48:32.190248145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 04:48:32.190269 containerd[1545]: time="2025-09-10T04:48:32.190264280Z" level=info msg="Connect containerd service" Sep 10 04:48:32.190320 containerd[1545]: time="2025-09-10T04:48:32.190305497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 04:48:32.192462 containerd[1545]: time="2025-09-10T04:48:32.192368864Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 04:48:32.260617 containerd[1545]: time="2025-09-10T04:48:32.260488701Z" level=info msg="Start subscribing containerd event" Sep 10 04:48:32.260617 containerd[1545]: time="2025-09-10T04:48:32.260590146Z" level=info msg="Start recovering state" Sep 10 04:48:32.260869 containerd[1545]: time="2025-09-10T04:48:32.260760246Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 04:48:32.260869 containerd[1545]: time="2025-09-10T04:48:32.260814883Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 10 04:48:32.260997 containerd[1545]: time="2025-09-10T04:48:32.260973680Z" level=info msg="Start event monitor" Sep 10 04:48:32.261067 containerd[1545]: time="2025-09-10T04:48:32.261049684Z" level=info msg="Start cni network conf syncer for default" Sep 10 04:48:32.261130 containerd[1545]: time="2025-09-10T04:48:32.261120456Z" level=info msg="Start streaming server" Sep 10 04:48:32.261256 containerd[1545]: time="2025-09-10T04:48:32.261167624Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 04:48:32.261256 containerd[1545]: time="2025-09-10T04:48:32.261208122Z" level=info msg="runtime interface starting up..." Sep 10 04:48:32.261256 containerd[1545]: time="2025-09-10T04:48:32.261214992Z" level=info msg="starting plugins..." Sep 10 04:48:32.261256 containerd[1545]: time="2025-09-10T04:48:32.261231926Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 04:48:32.261523 containerd[1545]: time="2025-09-10T04:48:32.261510101Z" level=info msg="containerd successfully booted in 0.101365s" Sep 10 04:48:32.261635 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 04:48:32.284904 tar[1533]: linux-arm64/LICENSE Sep 10 04:48:32.284984 tar[1533]: linux-arm64/README.md Sep 10 04:48:32.304620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 04:48:32.882419 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 04:48:32.900932 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 04:48:32.903261 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 04:48:32.926455 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 04:48:32.928586 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 04:48:32.930670 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 04:48:32.955010 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 04:48:32.957902 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 04:48:32.960235 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 04:48:32.961679 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 04:48:33.525731 systemd-networkd[1438]: eth0: Gained IPv6LL Sep 10 04:48:33.528061 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 04:48:33.529529 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 04:48:33.531675 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 04:48:33.533830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:48:33.535606 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 04:48:33.554106 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 04:48:33.555589 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 04:48:33.556895 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 04:48:33.558552 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 04:48:34.067076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:48:34.068356 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 10 04:48:34.071967 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 04:48:34.072848 systemd[1]: Startup finished in 1.973s (kernel) + 4.809s (initrd) + 3.715s (userspace) = 10.498s. Sep 10 04:48:34.422380 kubelet[1640]: E0910 04:48:34.422272 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 04:48:34.424600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 04:48:34.424729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 04:48:34.425048 systemd[1]: kubelet.service: Consumed 752ms CPU time, 257M memory peak. Sep 10 04:48:38.086738 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 04:48:38.087719 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:57178.service - OpenSSH per-connection server daemon (10.0.0.1:57178). Sep 10 04:48:38.165930 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 57178 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:38.169228 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:38.175142 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 04:48:38.175986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 04:48:38.181968 systemd-logind[1516]: New session 1 of user core. Sep 10 04:48:38.200264 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 04:48:38.202721 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 04:48:38.219520 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 04:48:38.222706 systemd-logind[1516]: New session c1 of user core. Sep 10 04:48:38.331446 systemd[1658]: Queued start job for default target default.target. Sep 10 04:48:38.355415 systemd[1658]: Created slice app.slice - User Application Slice. Sep 10 04:48:38.355445 systemd[1658]: Reached target paths.target - Paths. Sep 10 04:48:38.355478 systemd[1658]: Reached target timers.target - Timers. Sep 10 04:48:38.357211 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 04:48:38.369766 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 04:48:38.369868 systemd[1658]: Reached target sockets.target - Sockets. Sep 10 04:48:38.369903 systemd[1658]: Reached target basic.target - Basic System. Sep 10 04:48:38.369930 systemd[1658]: Reached target default.target - Main User Target. Sep 10 04:48:38.369953 systemd[1658]: Startup finished in 141ms. Sep 10 04:48:38.370186 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 04:48:38.371399 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 04:48:38.432022 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:57194.service - OpenSSH per-connection server daemon (10.0.0.1:57194). 
Sep 10 04:48:38.506229 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 57194 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:38.506882 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:38.513532 systemd-logind[1516]: New session 2 of user core. Sep 10 04:48:38.532696 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 04:48:38.583096 sshd[1672]: Connection closed by 10.0.0.1 port 57194 Sep 10 04:48:38.583577 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Sep 10 04:48:38.595413 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:57194.service: Deactivated successfully. Sep 10 04:48:38.596812 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 04:48:38.598728 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. Sep 10 04:48:38.600511 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:57206.service - OpenSSH per-connection server daemon (10.0.0.1:57206). Sep 10 04:48:38.601233 systemd-logind[1516]: Removed session 2. Sep 10 04:48:38.656499 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 57206 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:38.657834 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:38.661573 systemd-logind[1516]: New session 3 of user core. Sep 10 04:48:38.676698 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 04:48:38.727578 sshd[1681]: Connection closed by 10.0.0.1 port 57206 Sep 10 04:48:38.727732 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Sep 10 04:48:38.739396 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:57206.service: Deactivated successfully. Sep 10 04:48:38.742782 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 04:48:38.743442 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. Sep 10 04:48:38.745722 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:57222.service - OpenSSH per-connection server daemon (10.0.0.1:57222). Sep 10 04:48:38.746149 systemd-logind[1516]: Removed session 3. Sep 10 04:48:38.798805 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 57222 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:38.800091 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:38.803947 systemd-logind[1516]: New session 4 of user core. Sep 10 04:48:38.811738 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 04:48:38.864630 sshd[1690]: Connection closed by 10.0.0.1 port 57222 Sep 10 04:48:38.864647 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Sep 10 04:48:38.878446 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:57222.service: Deactivated successfully. Sep 10 04:48:38.880909 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 04:48:38.882701 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. Sep 10 04:48:38.885032 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:57230.service - OpenSSH per-connection server daemon (10.0.0.1:57230). Sep 10 04:48:38.885685 systemd-logind[1516]: Removed session 4. 
Sep 10 04:48:38.942288 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 57230 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:38.943575 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:38.948612 systemd-logind[1516]: New session 5 of user core. Sep 10 04:48:38.959706 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 04:48:39.014967 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 04:48:39.015220 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:48:39.029328 sudo[1700]: pam_unix(sudo:session): session closed for user root Sep 10 04:48:39.030668 sshd[1699]: Connection closed by 10.0.0.1 port 57230 Sep 10 04:48:39.031014 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Sep 10 04:48:39.040363 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:57230.service: Deactivated successfully. Sep 10 04:48:39.041881 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 04:48:39.043353 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. Sep 10 04:48:39.045014 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234). Sep 10 04:48:39.046407 systemd-logind[1516]: Removed session 5. Sep 10 04:48:39.098509 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:39.099705 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:39.103624 systemd-logind[1516]: New session 6 of user core. Sep 10 04:48:39.114678 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 04:48:39.165176 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 04:48:39.165784 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:48:39.250147 sudo[1711]: pam_unix(sudo:session): session closed for user root Sep 10 04:48:39.255083 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 04:48:39.255333 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:48:39.263923 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 04:48:39.296582 augenrules[1733]: No rules Sep 10 04:48:39.297610 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 04:48:39.297829 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 04:48:39.299217 sudo[1710]: pam_unix(sudo:session): session closed for user root Sep 10 04:48:39.300632 sshd[1709]: Connection closed by 10.0.0.1 port 57234 Sep 10 04:48:39.300941 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Sep 10 04:48:39.308321 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:57234.service: Deactivated successfully. Sep 10 04:48:39.309776 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 04:48:39.310359 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Sep 10 04:48:39.312430 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:57242.service - OpenSSH per-connection server daemon (10.0.0.1:57242). Sep 10 04:48:39.312975 systemd-logind[1516]: Removed session 6. 
Sep 10 04:48:39.367913 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:48:39.368971 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:48:39.373059 systemd-logind[1516]: New session 7 of user core. Sep 10 04:48:39.389675 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 04:48:39.439528 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 04:48:39.439796 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:48:39.711138 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 04:48:39.721847 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 04:48:39.912513 dockerd[1767]: time="2025-09-10T04:48:39.912451098Z" level=info msg="Starting up" Sep 10 04:48:39.913276 dockerd[1767]: time="2025-09-10T04:48:39.913255316Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 04:48:39.923120 dockerd[1767]: time="2025-09-10T04:48:39.923085961Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 10 04:48:40.302498 dockerd[1767]: time="2025-09-10T04:48:40.302434304Z" level=info msg="Loading containers: start." Sep 10 04:48:40.310573 kernel: Initializing XFRM netlink socket Sep 10 04:48:40.493180 systemd-networkd[1438]: docker0: Link UP Sep 10 04:48:40.497007 dockerd[1767]: time="2025-09-10T04:48:40.496960573Z" level=info msg="Loading containers: done." Sep 10 04:48:40.511330 dockerd[1767]: time="2025-09-10T04:48:40.511266258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 04:48:40.511466 dockerd[1767]: time="2025-09-10T04:48:40.511349303Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 04:48:40.511466 dockerd[1767]: time="2025-09-10T04:48:40.511437023Z" level=info msg="Initializing buildkit" Sep 10 04:48:40.533901 dockerd[1767]: time="2025-09-10T04:48:40.533862389Z" level=info msg="Completed buildkit initialization" Sep 10 04:48:40.538470 dockerd[1767]: time="2025-09-10T04:48:40.538435981Z" level=info msg="Daemon has completed initialization" Sep 10 04:48:40.538636 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 04:48:40.539060 dockerd[1767]: time="2025-09-10T04:48:40.538575454Z" level=info msg="API listen on /run/docker.sock" Sep 10 04:48:41.053164 containerd[1545]: time="2025-09-10T04:48:41.052973883Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 10 04:48:41.567490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266654598.mount: Deactivated successfully. 
Sep 10 04:48:42.683519 containerd[1545]: time="2025-09-10T04:48:42.683453594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:42.684137 containerd[1545]: time="2025-09-10T04:48:42.684095281Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 10 04:48:42.684792 containerd[1545]: time="2025-09-10T04:48:42.684761147Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:42.687968 containerd[1545]: time="2025-09-10T04:48:42.687933127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:42.689504 containerd[1545]: time="2025-09-10T04:48:42.689471256Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.636455728s" Sep 10 04:48:42.689536 containerd[1545]: time="2025-09-10T04:48:42.689507826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 10 04:48:42.690783 containerd[1545]: time="2025-09-10T04:48:42.690760184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 10 04:48:43.957080 containerd[1545]: time="2025-09-10T04:48:43.957032214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:43.957948 containerd[1545]: time="2025-09-10T04:48:43.957685963Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 10 04:48:43.958711 containerd[1545]: time="2025-09-10T04:48:43.958666907Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:43.961271 containerd[1545]: time="2025-09-10T04:48:43.961241055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:43.963123 containerd[1545]: time="2025-09-10T04:48:43.963094943Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.272300986s" Sep 10 04:48:43.963200 containerd[1545]: time="2025-09-10T04:48:43.963126959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 10 
04:48:43.963775 containerd[1545]: time="2025-09-10T04:48:43.963588492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 10 04:48:44.675264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 04:48:44.677721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:48:44.810017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:48:44.813243 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 04:48:44.883269 kubelet[2054]: E0910 04:48:44.883181 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 04:48:44.886532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 04:48:44.886696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 04:48:44.888621 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.5M memory peak. Sep 10 04:48:45.237583 containerd[1545]: time="2025-09-10T04:48:45.236871225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:45.237910 containerd[1545]: time="2025-09-10T04:48:45.237589112Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 10 04:48:45.238307 containerd[1545]: time="2025-09-10T04:48:45.238098096Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:45.240673 containerd[1545]: time="2025-09-10T04:48:45.240639179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:45.241692 containerd[1545]: time="2025-09-10T04:48:45.241655309Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.27803428s" Sep 10 04:48:45.241736 containerd[1545]: time="2025-09-10T04:48:45.241694403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 10 04:48:45.242560 containerd[1545]: time="2025-09-10T04:48:45.242500431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 10 04:48:46.179873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484565691.mount: Deactivated successfully. 
Sep 10 04:48:46.386978 containerd[1545]: time="2025-09-10T04:48:46.386927776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:46.387588 containerd[1545]: time="2025-09-10T04:48:46.387563223Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 10 04:48:46.388392 containerd[1545]: time="2025-09-10T04:48:46.388351696Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:46.390217 containerd[1545]: time="2025-09-10T04:48:46.390179126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:46.391160 containerd[1545]: time="2025-09-10T04:48:46.391121263Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.148592929s" Sep 10 04:48:46.391160 containerd[1545]: time="2025-09-10T04:48:46.391155842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 10 04:48:46.391777 containerd[1545]: time="2025-09-10T04:48:46.391756710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 04:48:46.940407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338865622.mount: Deactivated successfully. 
Sep 10 04:48:47.551730 containerd[1545]: time="2025-09-10T04:48:47.551685221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:47.552917 containerd[1545]: time="2025-09-10T04:48:47.552886284Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 10 04:48:47.553810 containerd[1545]: time="2025-09-10T04:48:47.553755460Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:47.556294 containerd[1545]: time="2025-09-10T04:48:47.556262527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:47.558125 containerd[1545]: time="2025-09-10T04:48:47.558089387Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.166303335s" Sep 10 04:48:47.558178 containerd[1545]: time="2025-09-10T04:48:47.558132083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 10 04:48:47.558588 containerd[1545]: time="2025-09-10T04:48:47.558566311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 04:48:47.969220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707024932.mount: Deactivated successfully. 
Sep 10 04:48:47.973035 containerd[1545]: time="2025-09-10T04:48:47.972979122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 04:48:47.973712 containerd[1545]: time="2025-09-10T04:48:47.973662726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 04:48:47.974376 containerd[1545]: time="2025-09-10T04:48:47.974332897Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 04:48:47.976904 containerd[1545]: time="2025-09-10T04:48:47.976862590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 04:48:47.977453 containerd[1545]: time="2025-09-10T04:48:47.977311610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 418.712998ms" Sep 10 04:48:47.977453 containerd[1545]: time="2025-09-10T04:48:47.977340953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 04:48:47.977917 containerd[1545]: time="2025-09-10T04:48:47.977892473Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 04:48:48.435578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061312446.mount: Deactivated successfully. 
Sep 10 04:48:50.163472 containerd[1545]: time="2025-09-10T04:48:50.163406239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:50.163918 containerd[1545]: time="2025-09-10T04:48:50.163885849Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 10 04:48:50.164822 containerd[1545]: time="2025-09-10T04:48:50.164782661Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:50.168032 containerd[1545]: time="2025-09-10T04:48:50.167991248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:48:50.169177 containerd[1545]: time="2025-09-10T04:48:50.169067134Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.191146117s" Sep 10 04:48:50.169177 containerd[1545]: time="2025-09-10T04:48:50.169099039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 10 04:48:54.363967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:48:54.364123 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.5M memory peak. Sep 10 04:48:54.366000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:48:54.384607 systemd[1]: Reload requested from client PID 2213 ('systemctl') (unit session-7.scope)... Sep 10 04:48:54.384622 systemd[1]: Reloading... Sep 10 04:48:54.448592 zram_generator::config[2257]: No configuration found. Sep 10 04:48:54.635592 systemd[1]: Reloading finished in 250 ms. Sep 10 04:48:54.697015 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 04:48:54.697083 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 04:48:54.697312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:48:54.697348 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak. Sep 10 04:48:54.698664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:48:54.823203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:48:54.827273 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 04:48:54.858737 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 04:48:54.858737 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 10 04:48:54.858737 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 04:48:54.859020 kubelet[2302]: I0910 04:48:54.858795 2302 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 04:48:55.797661 kubelet[2302]: I0910 04:48:55.797603 2302 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 04:48:55.797661 kubelet[2302]: I0910 04:48:55.797637 2302 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 04:48:55.797904 kubelet[2302]: I0910 04:48:55.797871 2302 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 04:48:55.817250 kubelet[2302]: E0910 04:48:55.817213 2302 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 10 04:48:55.819102 kubelet[2302]: I0910 04:48:55.819065 2302 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 04:48:55.826392 kubelet[2302]: I0910 04:48:55.826364 2302 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 04:48:55.829887 kubelet[2302]: I0910 04:48:55.829868 2302 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 04:48:55.830670 kubelet[2302]: I0910 04:48:55.830643 2302 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 04:48:55.830804 kubelet[2302]: I0910 04:48:55.830779 2302 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 04:48:55.830991 kubelet[2302]: I0910 04:48:55.830808 2302 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 04:48:55.831138 kubelet[2302]: I0910 04:48:55.831127 2302 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 04:48:55.831138 kubelet[2302]: I0910 04:48:55.831138 2302 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 04:48:55.831376 kubelet[2302]: I0910 04:48:55.831351 2302 state_mem.go:36] "Initialized new in-memory state store" Sep 10 04:48:55.833536 kubelet[2302]: I0910 04:48:55.833503 2302 kubelet.go:408] "Attempting to sync node with API server" Sep 10 04:48:55.833536 kubelet[2302]: I0910 04:48:55.833533 2302 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 04:48:55.833608 kubelet[2302]: I0910 04:48:55.833573 2302 kubelet.go:314] "Adding apiserver pod source" Sep 10 04:48:55.833663 kubelet[2302]: I0910 04:48:55.833652 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 04:48:55.836364 kubelet[2302]: W0910 04:48:55.836273 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 10 04:48:55.836364 kubelet[2302]: E0910 04:48:55.836330 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 10 04:48:55.837062 kubelet[2302]: W0910 04:48:55.837029 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 10 04:48:55.837104 kubelet[2302]: E0910 04:48:55.837074 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 10 04:48:55.838577 kubelet[2302]: I0910 04:48:55.838464 2302 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 04:48:55.841572 kubelet[2302]: I0910 04:48:55.839945 2302 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 04:48:55.841572 kubelet[2302]: W0910 04:48:55.840123 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 04:48:55.841572 kubelet[2302]: I0910 04:48:55.841072 2302 server.go:1274] "Started kubelet" Sep 10 04:48:55.841771 kubelet[2302]: I0910 04:48:55.841712 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 04:48:55.842042 kubelet[2302]: I0910 04:48:55.842012 2302 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 04:48:55.842336 kubelet[2302]: I0910 04:48:55.842172 2302 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 04:48:55.843493 kubelet[2302]: I0910 04:48:55.843423 2302 server.go:449] "Adding debug handlers to kubelet server" Sep 10 04:48:55.843633 kubelet[2302]: I0910 04:48:55.843615 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 04:48:55.844333 kubelet[2302]: I0910 04:48:55.844008 2302 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 04:48:55.844376 kubelet[2302]: I0910 04:48:55.844361 2302 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 04:48:55.844835 kubelet[2302]: I0910 04:48:55.844818 2302 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 04:48:55.844882 kubelet[2302]: I0910 04:48:55.844865 2302 reconciler.go:26] "Reconciler: start to sync state" Sep 10 04:48:55.846039 kubelet[2302]: E0910 04:48:55.846004 2302 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 04:48:55.846095 kubelet[2302]: E0910 04:48:55.846059 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Sep 10 04:48:55.846455 kubelet[2302]: I0910 04:48:55.846435 2302 factory.go:221] Registration of the systemd container factory successfully Sep 10 04:48:55.846553 kubelet[2302]: I0910 
04:48:55.846527 2302 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 04:48:55.848168 kubelet[2302]: I0910 04:48:55.848147 2302 factory.go:221] Registration of the containerd container factory successfully Sep 10 04:48:55.848349 kubelet[2302]: E0910 04:48:55.847252 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863d2821ac0c0a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 04:48:55.841046692 +0000 UTC m=+1.010763744,LastTimestamp:2025-09-10 04:48:55.841046692 +0000 UTC m=+1.010763744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 04:48:55.848746 kubelet[2302]: W0910 04:48:55.848698 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 10 04:48:55.848795 kubelet[2302]: E0910 04:48:55.848750 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 10 04:48:55.857428 kubelet[2302]: I0910 04:48:55.857391 2302 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 04:48:55.858842 kubelet[2302]: I0910 04:48:55.858807 2302 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 04:48:55.858842 kubelet[2302]: I0910 04:48:55.858837 2302 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 04:48:55.859090 kubelet[2302]: I0910 04:48:55.858854 2302 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 04:48:55.859090 kubelet[2302]: E0910 04:48:55.858892 2302 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 04:48:55.861461 kubelet[2302]: I0910 04:48:55.861330 2302 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 04:48:55.861461 kubelet[2302]: I0910 04:48:55.861464 2302 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 04:48:55.861586 kubelet[2302]: I0910 04:48:55.861490 2302 state_mem.go:36] "Initialized new in-memory state store" Sep 10 04:48:55.946610 kubelet[2302]: E0910 04:48:55.946536 2302 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 04:48:55.959805 kubelet[2302]: E0910 04:48:55.959773 2302 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 04:48:55.969783 kubelet[2302]: I0910 04:48:55.969757 2302 policy_none.go:49] "None policy: Start" Sep 10 04:48:55.970263 kubelet[2302]: W0910 04:48:55.970198 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 10 04:48:55.970326 kubelet[2302]: E0910 04:48:55.970271 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 10 04:48:55.970548 kubelet[2302]: I0910 04:48:55.970521 2302 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 04:48:55.970633 kubelet[2302]: I0910 04:48:55.970596 2302 state_mem.go:35] "Initializing new in-memory state store" Sep 10 04:48:55.978656 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 04:48:55.993000 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 04:48:55.995712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 04:48:56.012390 kubelet[2302]: I0910 04:48:56.012371 2302 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 04:48:56.012686 kubelet[2302]: I0910 04:48:56.012669 2302 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 04:48:56.012790 kubelet[2302]: I0910 04:48:56.012756 2302 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 04:48:56.013008 kubelet[2302]: I0910 04:48:56.012981 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 04:48:56.014235 kubelet[2302]: E0910 04:48:56.014209 2302 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 04:48:56.047510 kubelet[2302]: E0910 04:48:56.047462 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Sep 10 04:48:56.114341 kubelet[2302]: I0910 04:48:56.114271 2302 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 04:48:56.115371 kubelet[2302]: E0910 04:48:56.115345 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 10 04:48:56.168643 systemd[1]: Created slice kubepods-burstable-pod1ec6968b5b243c871f7b6868f17a4ef4.slice - libcontainer container kubepods-burstable-pod1ec6968b5b243c871f7b6868f17a4ef4.slice. Sep 10 04:48:56.186476 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 10 04:48:56.208022 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
Sep 10 04:48:56.246741 kubelet[2302]: I0910 04:48:56.246702 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:48:56.246803 kubelet[2302]: I0910 04:48:56.246760 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:48:56.246803 kubelet[2302]: I0910 04:48:56.246793 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 04:48:56.246854 kubelet[2302]: I0910 04:48:56.246821 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ec6968b5b243c871f7b6868f17a4ef4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec6968b5b243c871f7b6868f17a4ef4\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:48:56.246854 kubelet[2302]: I0910 04:48:56.246839 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ec6968b5b243c871f7b6868f17a4ef4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec6968b5b243c871f7b6868f17a4ef4\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:48:56.246854 kubelet[2302]: I0910 04:48:56.246853 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:48:56.246912 kubelet[2302]: I0910 04:48:56.246868 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:48:56.246912 kubelet[2302]: I0910 04:48:56.246881 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:48:56.246912 kubelet[2302]: I0910 04:48:56.246897 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ec6968b5b243c871f7b6868f17a4ef4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ec6968b5b243c871f7b6868f17a4ef4\") " 
pod="kube-system/kube-apiserver-localhost" Sep 10 04:48:56.316499 kubelet[2302]: I0910 04:48:56.316476 2302 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 04:48:56.316768 kubelet[2302]: E0910 04:48:56.316746 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 10 04:48:56.448879 kubelet[2302]: E0910 04:48:56.448795 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Sep 10 04:48:56.484519 containerd[1545]: time="2025-09-10T04:48:56.484480620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ec6968b5b243c871f7b6868f17a4ef4,Namespace:kube-system,Attempt:0,}" Sep 10 04:48:56.502193 containerd[1545]: time="2025-09-10T04:48:56.502147372Z" level=info msg="connecting to shim 89fcc8197a492b5c8ae90318659ea084c3711a55f5140ac333de5be8a3b78435" address="unix:///run/containerd/s/b2b25e9b6e17d93788d46809930696ab2f62ec17f4087247684e9ec3084a5ddc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:48:56.507306 containerd[1545]: time="2025-09-10T04:48:56.507055740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 10 04:48:56.510867 containerd[1545]: time="2025-09-10T04:48:56.510835315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 10 04:48:56.527677 systemd[1]: Started cri-containerd-89fcc8197a492b5c8ae90318659ea084c3711a55f5140ac333de5be8a3b78435.scope - libcontainer container 89fcc8197a492b5c8ae90318659ea084c3711a55f5140ac333de5be8a3b78435. Sep 10 04:48:56.531298 containerd[1545]: time="2025-09-10T04:48:56.531242217Z" level=info msg="connecting to shim 44debf7b6b9333379b49eeec8bd263c20c82d4c7f9134c7f1c48f64af9dfc3d4" address="unix:///run/containerd/s/8b0e8b94faf8d6903cbf304ff6388076952173219a4de42f4ca0322d1add3b89" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:48:56.538273 containerd[1545]: time="2025-09-10T04:48:56.538225793Z" level=info msg="connecting to shim 4e794d06acf39f750759e0a6f4fffcf5bb8f318651c6300cde957fe4b0cfd5d4" address="unix:///run/containerd/s/e4daa63389a8983648c3f960c94fbcb39d2be64fb5b11c107c41ffac3981539b" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:48:56.558687 systemd[1]: Started cri-containerd-44debf7b6b9333379b49eeec8bd263c20c82d4c7f9134c7f1c48f64af9dfc3d4.scope - libcontainer container 44debf7b6b9333379b49eeec8bd263c20c82d4c7f9134c7f1c48f64af9dfc3d4. Sep 10 04:48:56.562190 systemd[1]: Started cri-containerd-4e794d06acf39f750759e0a6f4fffcf5bb8f318651c6300cde957fe4b0cfd5d4.scope - libcontainer container 4e794d06acf39f750759e0a6f4fffcf5bb8f318651c6300cde957fe4b0cfd5d4. 
Sep 10 04:48:56.576816 containerd[1545]: time="2025-09-10T04:48:56.576782451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ec6968b5b243c871f7b6868f17a4ef4,Namespace:kube-system,Attempt:0,} returns sandbox id \"89fcc8197a492b5c8ae90318659ea084c3711a55f5140ac333de5be8a3b78435\"" Sep 10 04:48:56.580266 containerd[1545]: time="2025-09-10T04:48:56.580224015Z" level=info msg="CreateContainer within sandbox \"89fcc8197a492b5c8ae90318659ea084c3711a55f5140ac333de5be8a3b78435\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 04:48:56.595313 containerd[1545]: time="2025-09-10T04:48:56.595282172Z" level=info msg="Container 565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:48:56.606251 containerd[1545]: time="2025-09-10T04:48:56.606125816Z" level=info msg="CreateContainer within sandbox \"89fcc8197a492b5c8ae90318659ea084c3711a55f5140ac333de5be8a3b78435\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb\"" Sep 10 04:48:56.607733 containerd[1545]: time="2025-09-10T04:48:56.607709622Z" level=info msg="StartContainer for \"565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb\"" Sep 10 04:48:56.608982 containerd[1545]: time="2025-09-10T04:48:56.608958617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e794d06acf39f750759e0a6f4fffcf5bb8f318651c6300cde957fe4b0cfd5d4\"" Sep 10 04:48:56.609185 containerd[1545]: time="2025-09-10T04:48:56.609157153Z" level=info msg="connecting to shim 565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb" address="unix:///run/containerd/s/b2b25e9b6e17d93788d46809930696ab2f62ec17f4087247684e9ec3084a5ddc" protocol=ttrpc version=3 Sep 10 04:48:56.610034 containerd[1545]: time="2025-09-10T04:48:56.609994601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"44debf7b6b9333379b49eeec8bd263c20c82d4c7f9134c7f1c48f64af9dfc3d4\"" Sep 10 04:48:56.611601 containerd[1545]: time="2025-09-10T04:48:56.611574209Z" level=info msg="CreateContainer within sandbox \"4e794d06acf39f750759e0a6f4fffcf5bb8f318651c6300cde957fe4b0cfd5d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 04:48:56.612779 containerd[1545]: time="2025-09-10T04:48:56.612752907Z" level=info msg="CreateContainer within sandbox \"44debf7b6b9333379b49eeec8bd263c20c82d4c7f9134c7f1c48f64af9dfc3d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 04:48:56.618795 containerd[1545]: time="2025-09-10T04:48:56.618767476Z" level=info msg="Container 71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:48:56.627675 systemd[1]: Started cri-containerd-565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb.scope - libcontainer container 565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb. 
Sep 10 04:48:56.629704 containerd[1545]: time="2025-09-10T04:48:56.629674780Z" level=info msg="CreateContainer within sandbox \"44debf7b6b9333379b49eeec8bd263c20c82d4c7f9134c7f1c48f64af9dfc3d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809\"" Sep 10 04:48:56.630042 containerd[1545]: time="2025-09-10T04:48:56.630024386Z" level=info msg="StartContainer for \"71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809\"" Sep 10 04:48:56.630849 containerd[1545]: time="2025-09-10T04:48:56.630822208Z" level=info msg="Container 5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:48:56.632712 containerd[1545]: time="2025-09-10T04:48:56.632682564Z" level=info msg="connecting to shim 71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809" address="unix:///run/containerd/s/8b0e8b94faf8d6903cbf304ff6388076952173219a4de42f4ca0322d1add3b89" protocol=ttrpc version=3 Sep 10 04:48:56.638208 containerd[1545]: time="2025-09-10T04:48:56.638175703Z" level=info msg="CreateContainer within sandbox \"4e794d06acf39f750759e0a6f4fffcf5bb8f318651c6300cde957fe4b0cfd5d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f\"" Sep 10 04:48:56.638777 containerd[1545]: time="2025-09-10T04:48:56.638734802Z" level=info msg="StartContainer for \"5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f\"" Sep 10 04:48:56.639945 containerd[1545]: time="2025-09-10T04:48:56.639911060Z" level=info msg="connecting to shim 5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f" address="unix:///run/containerd/s/e4daa63389a8983648c3f960c94fbcb39d2be64fb5b11c107c41ffac3981539b" protocol=ttrpc version=3 Sep 10 04:48:56.653704 systemd[1]: Started cri-containerd-71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809.scope - libcontainer container 71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809. Sep 10 04:48:56.666837 systemd[1]: Started cri-containerd-5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f.scope - libcontainer container 5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f. 
Sep 10 04:48:56.674941 containerd[1545]: time="2025-09-10T04:48:56.674900915Z" level=info msg="StartContainer for \"565927b11f857a8d03625e988642475692bad0160f81a8e4ff1851013b9023fb\" returns successfully" Sep 10 04:48:56.711220 containerd[1545]: time="2025-09-10T04:48:56.711111173Z" level=info msg="StartContainer for \"71abecf33c920b9900cc4456432c4f5655fdbf22fc3fde35460b20c86438a809\" returns successfully" Sep 10 04:48:56.712736 containerd[1545]: time="2025-09-10T04:48:56.712701937Z" level=info msg="StartContainer for \"5358857c3ac4a545c3ef5ffa3e6891f7813ae91767a82021645d258f225bc00f\" returns successfully" Sep 10 04:48:56.718860 kubelet[2302]: I0910 04:48:56.718752 2302 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 04:48:56.719237 kubelet[2302]: E0910 04:48:56.719205 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 10 04:48:57.521397 kubelet[2302]: I0910 04:48:57.521364 2302 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 04:48:57.705976 kubelet[2302]: E0910 04:48:57.705940 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 04:48:57.793774 kubelet[2302]: I0910 04:48:57.792919 2302 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 04:48:57.793774 kubelet[2302]: E0910 04:48:57.792962 2302 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 04:48:57.834978 kubelet[2302]: I0910 04:48:57.834941 2302 apiserver.go:52] "Watching apiserver" Sep 10 04:48:57.845477 kubelet[2302]: I0910 04:48:57.845435 2302 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 04:48:57.882718 kubelet[2302]: E0910 04:48:57.882684 2302 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 04:48:59.761138 systemd[1]: Reload requested from client PID 2574 ('systemctl') (unit session-7.scope)... Sep 10 04:48:59.761153 systemd[1]: Reloading... Sep 10 04:48:59.823598 zram_generator::config[2617]: No configuration found. Sep 10 04:48:59.996999 systemd[1]: Reloading finished in 235 ms. Sep 10 04:49:00.027914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:49:00.044495 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 04:49:00.045641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:49:00.045702 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 128.4M memory peak. Sep 10 04:49:00.047292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:49:00.176171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:49:00.190868 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 04:49:00.226023 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 04:49:00.226023 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 04:49:00.226023 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 04:49:00.226373 kubelet[2659]: I0910 04:49:00.226062 2659 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 04:49:00.232160 kubelet[2659]: I0910 04:49:00.232115 2659 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 04:49:00.232160 kubelet[2659]: I0910 04:49:00.232155 2659 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 04:49:00.232392 kubelet[2659]: I0910 04:49:00.232366 2659 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 04:49:00.233730 kubelet[2659]: I0910 04:49:00.233707 2659 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 04:49:00.235808 kubelet[2659]: I0910 04:49:00.235787 2659 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 04:49:00.239929 kubelet[2659]: I0910 04:49:00.239904 2659 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 04:49:00.243024 kubelet[2659]: I0910 04:49:00.242992 2659 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 04:49:00.243157 kubelet[2659]: I0910 04:49:00.243141 2659 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 04:49:00.243291 kubelet[2659]: I0910 04:49:00.243267 2659 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 04:49:00.243447 kubelet[2659]: I0910 04:49:00.243291 2659 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 04:49:00.243512 kubelet[2659]: I0910 04:49:00.243455 2659 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 04:49:00.243512 kubelet[2659]: I0910 04:49:00.243464 2659 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 04:49:00.243512 kubelet[2659]: I0910 04:49:00.243499 2659 state_mem.go:36] "Initialized new in-memory state store" Sep 10 04:49:00.243622 kubelet[2659]: I0910 04:49:00.243610 2659 kubelet.go:408] "Attempting to sync node with API server" Sep 10 04:49:00.243651 kubelet[2659]: I0910 04:49:00.243625 2659 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 04:49:00.243651 kubelet[2659]: I0910 04:49:00.243642 2659 kubelet.go:314] "Adding apiserver pod source" Sep 10 04:49:00.244510 kubelet[2659]: I0910 04:49:00.243654 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 04:49:00.244510 kubelet[2659]: I0910 04:49:00.244155 2659 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 04:49:00.244791 kubelet[2659]: I0910 04:49:00.244760 2659 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 04:49:00.246574 kubelet[2659]: I0910 04:49:00.245272 2659 server.go:1274] "Started kubelet" Sep 10 04:49:00.246574 kubelet[2659]: I0910 04:49:00.245920 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 04:49:00.246574 kubelet[2659]: I0910 04:49:00.246169 2659 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 04:49:00.247608 kubelet[2659]: I0910 04:49:00.247567 2659 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 04:49:00.251187 kubelet[2659]: E0910 04:49:00.248650 2659 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 04:49:00.251187 kubelet[2659]: I0910 04:49:00.249362 2659 server.go:449] "Adding debug handlers to kubelet server" Sep 10 04:49:00.251575 kubelet[2659]: I0910 04:49:00.251558 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 04:49:00.256256 kubelet[2659]: I0910 04:49:00.256219 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 04:49:00.256397 kubelet[2659]: I0910 04:49:00.256380 2659 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 04:49:00.257635 kubelet[2659]: E0910 04:49:00.256783 2659 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 04:49:00.258940 kubelet[2659]: I0910 04:49:00.258766 2659 factory.go:221] Registration of the systemd container factory successfully Sep 10 04:49:00.258940 kubelet[2659]: I0910 04:49:00.258865 2659 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 04:49:00.259803 kubelet[2659]: I0910 04:49:00.259784 2659 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 04:49:00.259996 kubelet[2659]: I0910 04:49:00.259982 2659 reconciler.go:26] "Reconciler: start to sync state" Sep 10 04:49:00.261449 kubelet[2659]: I0910 04:49:00.261418 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 04:49:00.262124 kubelet[2659]: I0910 04:49:00.262098 2659 factory.go:221] Registration of the containerd container factory successfully Sep 10 04:49:00.262994 kubelet[2659]: I0910 04:49:00.262973 2659 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 04:49:00.263133 kubelet[2659]: I0910 04:49:00.263120 2659 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 04:49:00.263225 kubelet[2659]: I0910 04:49:00.263214 2659 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 04:49:00.263313 kubelet[2659]: E0910 04:49:00.263297 2659 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 04:49:00.294057 kubelet[2659]: I0910 04:49:00.293963 2659 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 04:49:00.294057 kubelet[2659]: I0910 04:49:00.293984 2659 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 04:49:00.294057 kubelet[2659]: I0910 04:49:00.294006 2659 state_mem.go:36] "Initialized new in-memory state store" Sep 10 04:49:00.294216 kubelet[2659]: I0910 04:49:00.294158 2659 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 04:49:00.294216 kubelet[2659]: I0910 04:49:00.294180 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 04:49:00.294216 kubelet[2659]: I0910 04:49:00.294199 2659 policy_none.go:49] "None policy: Start" Sep 10 04:49:00.295155 kubelet[2659]: I0910 04:49:00.294784 2659 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 04:49:00.295155 kubelet[2659]: I0910 04:49:00.294810 2659 state_mem.go:35] "Initializing new in-memory state store" Sep 10 04:49:00.295155 kubelet[2659]: I0910 04:49:00.294950 2659 state_mem.go:75] "Updated machine memory state" Sep 10 04:49:00.299388 kubelet[2659]: I0910 04:49:00.299366 2659 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 04:49:00.299546 kubelet[2659]: I0910 04:49:00.299521 2659 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 04:49:00.299778 kubelet[2659]: I0910 04:49:00.299557 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 04:49:00.299778 kubelet[2659]: I0910 04:49:00.299698 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 04:49:00.370765 kubelet[2659]: E0910 04:49:00.370725 2659 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 04:49:00.401483 kubelet[2659]: I0910 04:49:00.401457 2659 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 04:49:00.408407 kubelet[2659]: I0910 04:49:00.408375 2659 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 04:49:00.408523 kubelet[2659]: I0910 04:49:00.408457 2659 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 04:49:00.562212 kubelet[2659]: I0910 04:49:00.561851 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:49:00.562212 kubelet[2659]: I0910 04:49:00.561900 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" 
(UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:49:00.562212 kubelet[2659]: I0910 04:49:00.561919 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ec6968b5b243c871f7b6868f17a4ef4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec6968b5b243c871f7b6868f17a4ef4\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:49:00.562212 kubelet[2659]: I0910 04:49:00.561936 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ec6968b5b243c871f7b6868f17a4ef4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ec6968b5b243c871f7b6868f17a4ef4\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:49:00.562212 kubelet[2659]: I0910 04:49:00.561983 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:49:00.562383 kubelet[2659]: I0910 04:49:00.562025 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:49:00.562383 kubelet[2659]: I0910 04:49:00.562043 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 04:49:00.562383 kubelet[2659]: I0910 04:49:00.562058 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ec6968b5b243c871f7b6868f17a4ef4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ec6968b5b243c871f7b6868f17a4ef4\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:49:00.562383 kubelet[2659]: I0910 04:49:00.562072 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:49:01.245348 kubelet[2659]: I0910 04:49:01.244857 2659 apiserver.go:52] "Watching apiserver" Sep 10 04:49:01.260423 kubelet[2659]: I0910 04:49:01.260380 2659 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 04:49:01.289134 kubelet[2659]: E0910 04:49:01.289098 2659 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 04:49:01.300997 kubelet[2659]: I0910 04:49:01.300197 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.300182763 
podStartE2EDuration="1.300182763s" podCreationTimestamp="2025-09-10 04:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:01.299930582 +0000 UTC m=+1.105881783" watchObservedRunningTime="2025-09-10 04:49:01.300182763 +0000 UTC m=+1.106133964" Sep 10 04:49:01.311448 kubelet[2659]: I0910 04:49:01.311387 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.311358139 podStartE2EDuration="1.311358139s" podCreationTimestamp="2025-09-10 04:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:01.310867974 +0000 UTC m=+1.116819175" watchObservedRunningTime="2025-09-10 04:49:01.311358139 +0000 UTC m=+1.117309340" Sep 10 04:49:01.329083 kubelet[2659]: I0910 04:49:01.328846 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.328831596 podStartE2EDuration="2.328831596s" podCreationTimestamp="2025-09-10 04:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:01.318279034 +0000 UTC m=+1.124230235" watchObservedRunningTime="2025-09-10 04:49:01.328831596 +0000 UTC m=+1.134782797" Sep 10 04:49:06.032639 kubelet[2659]: I0910 04:49:06.032602 2659 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 04:49:06.033719 kubelet[2659]: I0910 04:49:06.033244 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 04:49:06.033759 containerd[1545]: time="2025-09-10T04:49:06.032957910Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 04:49:06.707354 systemd[1]: Created slice kubepods-besteffort-pod1d0302ba_cfe1_4f73_bb3a_bf0bc4eb2be2.slice - libcontainer container kubepods-besteffort-pod1d0302ba_cfe1_4f73_bb3a_bf0bc4eb2be2.slice. 
Sep 10 04:49:06.800741 kubelet[2659]: I0910 04:49:06.800651 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k25c\" (UniqueName: \"kubernetes.io/projected/1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2-kube-api-access-2k25c\") pod \"kube-proxy-6jqcg\" (UID: \"1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2\") " pod="kube-system/kube-proxy-6jqcg" Sep 10 04:49:06.800741 kubelet[2659]: I0910 04:49:06.800694 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2-kube-proxy\") pod \"kube-proxy-6jqcg\" (UID: \"1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2\") " pod="kube-system/kube-proxy-6jqcg" Sep 10 04:49:06.800741 kubelet[2659]: I0910 04:49:06.800713 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2-xtables-lock\") pod \"kube-proxy-6jqcg\" (UID: \"1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2\") " pod="kube-system/kube-proxy-6jqcg" Sep 10 04:49:06.801010 kubelet[2659]: I0910 04:49:06.800970 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2-lib-modules\") pod \"kube-proxy-6jqcg\" (UID: \"1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2\") " pod="kube-system/kube-proxy-6jqcg" Sep 10 04:49:07.018111 containerd[1545]: time="2025-09-10T04:49:07.018004706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6jqcg,Uid:1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2,Namespace:kube-system,Attempt:0,}" Sep 10 04:49:07.038004 containerd[1545]: time="2025-09-10T04:49:07.037909533Z" level=info msg="connecting to shim e11ca25cfd3767902e13e54830cae2b802ba318328261666102ed2a0699c2c63" address="unix:///run/containerd/s/a2da7036d63ce8d8bc0637678b6558dc811c555cf0866fa29bf0a3d7da914772" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:07.070771 systemd[1]: Started cri-containerd-e11ca25cfd3767902e13e54830cae2b802ba318328261666102ed2a0699c2c63.scope - libcontainer container e11ca25cfd3767902e13e54830cae2b802ba318328261666102ed2a0699c2c63. Sep 10 04:49:07.098079 systemd[1]: Created slice kubepods-besteffort-pod835b4b99_0f44_4d8f_8413_19a0eff4927a.slice - libcontainer container kubepods-besteffort-pod835b4b99_0f44_4d8f_8413_19a0eff4927a.slice. 
Sep 10 04:49:07.104202 kubelet[2659]: I0910 04:49:07.104165 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmr9\" (UniqueName: \"kubernetes.io/projected/835b4b99-0f44-4d8f-8413-19a0eff4927a-kube-api-access-plmr9\") pod \"tigera-operator-58fc44c59b-hptv5\" (UID: \"835b4b99-0f44-4d8f-8413-19a0eff4927a\") " pod="tigera-operator/tigera-operator-58fc44c59b-hptv5" Sep 10 04:49:07.104202 kubelet[2659]: I0910 04:49:07.104200 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/835b4b99-0f44-4d8f-8413-19a0eff4927a-var-lib-calico\") pod \"tigera-operator-58fc44c59b-hptv5\" (UID: \"835b4b99-0f44-4d8f-8413-19a0eff4927a\") " pod="tigera-operator/tigera-operator-58fc44c59b-hptv5" Sep 10 04:49:07.113075 containerd[1545]: time="2025-09-10T04:49:07.113033998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6jqcg,Uid:1d0302ba-cfe1-4f73-bb3a-bf0bc4eb2be2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e11ca25cfd3767902e13e54830cae2b802ba318328261666102ed2a0699c2c63\"" Sep 10 04:49:07.117061 containerd[1545]: time="2025-09-10T04:49:07.117019283Z" level=info msg="CreateContainer within sandbox \"e11ca25cfd3767902e13e54830cae2b802ba318328261666102ed2a0699c2c63\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 04:49:07.127579 containerd[1545]: time="2025-09-10T04:49:07.127498532Z" level=info msg="Container a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:07.130898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472850867.mount: Deactivated successfully. Sep 10 04:49:07.137750 containerd[1545]: time="2025-09-10T04:49:07.137687268Z" level=info msg="CreateContainer within sandbox \"e11ca25cfd3767902e13e54830cae2b802ba318328261666102ed2a0699c2c63\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950\"" Sep 10 04:49:07.139093 containerd[1545]: time="2025-09-10T04:49:07.139055650Z" level=info msg="StartContainer for \"a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950\"" Sep 10 04:49:07.143165 containerd[1545]: time="2025-09-10T04:49:07.143123642Z" level=info msg="connecting to shim a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950" address="unix:///run/containerd/s/a2da7036d63ce8d8bc0637678b6558dc811c555cf0866fa29bf0a3d7da914772" protocol=ttrpc version=3 Sep 10 04:49:07.161745 systemd[1]: Started cri-containerd-a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950.scope - libcontainer container a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950. 
Sep 10 04:49:07.195581 containerd[1545]: time="2025-09-10T04:49:07.194919825Z" level=info msg="StartContainer for \"a8a3bb5520dd1467854dccbd1ff9f1b5c6da5118cd65656aaf7154135f490950\" returns successfully" Sep 10 04:49:07.303352 kubelet[2659]: I0910 04:49:07.302402 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6jqcg" podStartSLOduration=1.302382855 podStartE2EDuration="1.302382855s" podCreationTimestamp="2025-09-10 04:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:07.302156931 +0000 UTC m=+7.108108132" watchObservedRunningTime="2025-09-10 04:49:07.302382855 +0000 UTC m=+7.108334056" Sep 10 04:49:07.402444 containerd[1545]: time="2025-09-10T04:49:07.402404830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-hptv5,Uid:835b4b99-0f44-4d8f-8413-19a0eff4927a,Namespace:tigera-operator,Attempt:0,}" Sep 10 04:49:07.420560 containerd[1545]: time="2025-09-10T04:49:07.420454273Z" level=info msg="connecting to shim d0aa9c945e3ed58fc5ba39f27c1e0da238242252aa16839f71cb31f40fc6573b" address="unix:///run/containerd/s/5f8aa4f76ee1353e5ea4e6f355b9522a18e6b501a6fbf0ed6628ab8e4fc655ba" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:07.441756 systemd[1]: Started cri-containerd-d0aa9c945e3ed58fc5ba39f27c1e0da238242252aa16839f71cb31f40fc6573b.scope - libcontainer container d0aa9c945e3ed58fc5ba39f27c1e0da238242252aa16839f71cb31f40fc6573b. Sep 10 04:49:07.475367 containerd[1545]: time="2025-09-10T04:49:07.475124558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-hptv5,Uid:835b4b99-0f44-4d8f-8413-19a0eff4927a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0aa9c945e3ed58fc5ba39f27c1e0da238242252aa16839f71cb31f40fc6573b\"" Sep 10 04:49:07.477630 containerd[1545]: time="2025-09-10T04:49:07.477512818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 04:49:08.676990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142716138.mount: Deactivated successfully. 
Sep 10 04:49:09.242753 containerd[1545]: time="2025-09-10T04:49:09.242696746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:09.243383 containerd[1545]: time="2025-09-10T04:49:09.243355493Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 04:49:09.244019 containerd[1545]: time="2025-09-10T04:49:09.243978366Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:09.246242 containerd[1545]: time="2025-09-10T04:49:09.246208454Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:09.247096 containerd[1545]: time="2025-09-10T04:49:09.247065614Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.769455492s" Sep 10 04:49:09.247096 containerd[1545]: time="2025-09-10T04:49:09.247092530Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 04:49:09.248855 containerd[1545]: time="2025-09-10T04:49:09.248821607Z" level=info msg="CreateContainer within sandbox \"d0aa9c945e3ed58fc5ba39f27c1e0da238242252aa16839f71cb31f40fc6573b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 04:49:09.260364 containerd[1545]: time="2025-09-10T04:49:09.259035137Z" level=info msg="Container dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:09.266108 containerd[1545]: time="2025-09-10T04:49:09.266060072Z" level=info msg="CreateContainer within sandbox \"d0aa9c945e3ed58fc5ba39f27c1e0da238242252aa16839f71cb31f40fc6573b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84\"" Sep 10 04:49:09.266579 containerd[1545]: time="2025-09-10T04:49:09.266536326Z" level=info msg="StartContainer for \"dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84\"" Sep 10 04:49:09.267343 containerd[1545]: time="2025-09-10T04:49:09.267306578Z" level=info msg="connecting to shim dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84" address="unix:///run/containerd/s/5f8aa4f76ee1353e5ea4e6f355b9522a18e6b501a6fbf0ed6628ab8e4fc655ba" protocol=ttrpc version=3 Sep 10 04:49:09.294721 systemd[1]: Started cri-containerd-dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84.scope - libcontainer container dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84. 
Sep 10 04:49:09.322342 containerd[1545]: time="2025-09-10T04:49:09.322271477Z" level=info msg="StartContainer for \"dc487f3730cc2e046085b0072cce44c77f5e0f951777252c1a74e3c8bf4e6e84\" returns successfully" Sep 10 04:49:10.312834 kubelet[2659]: I0910 04:49:10.312775 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-hptv5" podStartSLOduration=1.542087525 podStartE2EDuration="3.312746558s" podCreationTimestamp="2025-09-10 04:49:07 +0000 UTC" firstStartedPulling="2025-09-10 04:49:07.477102083 +0000 UTC m=+7.283053284" lastFinishedPulling="2025-09-10 04:49:09.247761156 +0000 UTC m=+9.053712317" observedRunningTime="2025-09-10 04:49:10.312645411 +0000 UTC m=+10.118596572" watchObservedRunningTime="2025-09-10 04:49:10.312746558 +0000 UTC m=+10.118697999" Sep 10 04:49:14.531805 sudo[1747]: pam_unix(sudo:session): session closed for user root Sep 10 04:49:14.534428 sshd[1746]: Connection closed by 10.0.0.1 port 57242 Sep 10 04:49:14.533328 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:14.536635 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:57242.service: Deactivated successfully. Sep 10 04:49:14.540415 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 04:49:14.540785 systemd[1]: session-7.scope: Consumed 6.038s CPU time, 219.2M memory peak. Sep 10 04:49:14.541763 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Sep 10 04:49:14.543113 systemd-logind[1516]: Removed session 7. Sep 10 04:49:17.577662 update_engine[1522]: I20250910 04:49:17.577584 1522 update_attempter.cc:509] Updating boot flags... Sep 10 04:49:19.782643 systemd[1]: Created slice kubepods-besteffort-pod20d8a4b5_0d72_47a5_880a_c167ad1052dd.slice - libcontainer container kubepods-besteffort-pod20d8a4b5_0d72_47a5_880a_c167ad1052dd.slice. Sep 10 04:49:19.891670 kubelet[2659]: I0910 04:49:19.891622 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/20d8a4b5-0d72-47a5-880a-c167ad1052dd-typha-certs\") pod \"calico-typha-b4bc6496f-vs2qv\" (UID: \"20d8a4b5-0d72-47a5-880a-c167ad1052dd\") " pod="calico-system/calico-typha-b4bc6496f-vs2qv" Sep 10 04:49:19.891670 kubelet[2659]: I0910 04:49:19.891672 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d8a4b5-0d72-47a5-880a-c167ad1052dd-tigera-ca-bundle\") pod \"calico-typha-b4bc6496f-vs2qv\" (UID: \"20d8a4b5-0d72-47a5-880a-c167ad1052dd\") " pod="calico-system/calico-typha-b4bc6496f-vs2qv" Sep 10 04:49:19.892057 kubelet[2659]: I0910 04:49:19.891693 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wvp\" (UniqueName: \"kubernetes.io/projected/20d8a4b5-0d72-47a5-880a-c167ad1052dd-kube-api-access-27wvp\") pod \"calico-typha-b4bc6496f-vs2qv\" (UID: \"20d8a4b5-0d72-47a5-880a-c167ad1052dd\") " pod="calico-system/calico-typha-b4bc6496f-vs2qv" Sep 10 04:49:19.956038 systemd[1]: Created slice kubepods-besteffort-podccf007a0_eeb6_4b37_9d86_e8f63bc9122e.slice - libcontainer container kubepods-besteffort-podccf007a0_eeb6_4b37_9d86_e8f63bc9122e.slice. 
Sep 10 04:49:20.089854 containerd[1545]: time="2025-09-10T04:49:20.089714699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b4bc6496f-vs2qv,Uid:20d8a4b5-0d72-47a5-880a-c167ad1052dd,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:20.095079 kubelet[2659]: I0910 04:49:20.095019 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcp2\" (UniqueName: \"kubernetes.io/projected/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-kube-api-access-2pcp2\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095266 kubelet[2659]: I0910 04:49:20.095062 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-cni-bin-dir\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095266 kubelet[2659]: I0910 04:49:20.095212 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-cni-log-dir\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095266 kubelet[2659]: I0910 04:49:20.095241 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-node-certs\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095420 kubelet[2659]: I0910 04:49:20.095405 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-cni-net-dir\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095663 kubelet[2659]: I0910 04:49:20.095469 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-lib-modules\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095663 kubelet[2659]: I0910 04:49:20.095488 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-var-lib-calico\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095663 kubelet[2659]: I0910 04:49:20.095504 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-tigera-ca-bundle\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095663 kubelet[2659]: I0910 04:49:20.095532 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-var-run-calico\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095663 kubelet[2659]: I0910 04:49:20.095565 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-flexvol-driver-host\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095784 kubelet[2659]: I0910 04:49:20.095582 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-xtables-lock\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.095784 kubelet[2659]: I0910 04:49:20.095602 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ccf007a0-eeb6-4b37-9d86-e8f63bc9122e-policysync\") pod \"calico-node-d72zp\" (UID: \"ccf007a0-eeb6-4b37-9d86-e8f63bc9122e\") " pod="calico-system/calico-node-d72zp" Sep 10 04:49:20.129058 containerd[1545]: time="2025-09-10T04:49:20.128699054Z" level=info msg="connecting to shim f54ae6c9c71fbc795258d5d1495b1a45e2fdc2846374e7c6ec8d7278379f33cc" address="unix:///run/containerd/s/166ddffc505522baca470eb174fb5d8952ebd5730b5d69071340c8330cd54f9d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:20.182693 systemd[1]: Started cri-containerd-f54ae6c9c71fbc795258d5d1495b1a45e2fdc2846374e7c6ec8d7278379f33cc.scope - libcontainer container f54ae6c9c71fbc795258d5d1495b1a45e2fdc2846374e7c6ec8d7278379f33cc. 
Sep 10 04:49:20.236301 containerd[1545]: time="2025-09-10T04:49:20.236021422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b4bc6496f-vs2qv,Uid:20d8a4b5-0d72-47a5-880a-c167ad1052dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"f54ae6c9c71fbc795258d5d1495b1a45e2fdc2846374e7c6ec8d7278379f33cc\"" Sep 10 04:49:20.242995 containerd[1545]: time="2025-09-10T04:49:20.242919187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 04:49:20.248628 kubelet[2659]: E0910 04:49:20.248585 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t2bk" podUID="b2a6a745-401d-495a-b87c-90f62d69bda1" Sep 10 04:49:20.259177 containerd[1545]: time="2025-09-10T04:49:20.259143229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d72zp,Uid:ccf007a0-eeb6-4b37-9d86-e8f63bc9122e,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:20.296675 containerd[1545]: time="2025-09-10T04:49:20.295126231Z" level=info msg="connecting to shim 2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080" address="unix:///run/containerd/s/330cc93e716fb58d97696a301188e639be9273ad281b30687243bace7bce554e" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:20.297418 kubelet[2659]: E0910 04:49:20.297290 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.297418 kubelet[2659]: W0910 04:49:20.297312 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.297418 kubelet[2659]: E0910 04:49:20.297340 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.299861 kubelet[2659]: E0910 04:49:20.299704 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.299861 kubelet[2659]: W0910 04:49:20.299725 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.299861 kubelet[2659]: E0910 04:49:20.299743 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.300178 kubelet[2659]: E0910 04:49:20.300047 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.300178 kubelet[2659]: W0910 04:49:20.300060 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.300178 kubelet[2659]: E0910 04:49:20.300070 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.300801 kubelet[2659]: E0910 04:49:20.300372 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.300801 kubelet[2659]: W0910 04:49:20.300384 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.300801 kubelet[2659]: E0910 04:49:20.300394 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.301755 kubelet[2659]: E0910 04:49:20.301620 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.301755 kubelet[2659]: W0910 04:49:20.301646 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.301755 kubelet[2659]: E0910 04:49:20.301661 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.302022 kubelet[2659]: E0910 04:49:20.301916 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.302022 kubelet[2659]: W0910 04:49:20.301928 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.302022 kubelet[2659]: E0910 04:49:20.301938 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.302745 kubelet[2659]: E0910 04:49:20.302609 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.302745 kubelet[2659]: W0910 04:49:20.302624 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.302745 kubelet[2659]: E0910 04:49:20.302646 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.303050 kubelet[2659]: E0910 04:49:20.302900 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.303050 kubelet[2659]: W0910 04:49:20.302912 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.303050 kubelet[2659]: E0910 04:49:20.302922 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.303736 kubelet[2659]: E0910 04:49:20.303596 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.303736 kubelet[2659]: W0910 04:49:20.303613 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.303736 kubelet[2659]: E0910 04:49:20.303624 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.304048 kubelet[2659]: E0910 04:49:20.303896 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.304048 kubelet[2659]: W0910 04:49:20.303908 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.304048 kubelet[2659]: E0910 04:49:20.303918 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.305770 kubelet[2659]: E0910 04:49:20.305626 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.305770 kubelet[2659]: W0910 04:49:20.305653 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.305770 kubelet[2659]: E0910 04:49:20.305664 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.305936 kubelet[2659]: E0910 04:49:20.305926 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.306008 kubelet[2659]: W0910 04:49:20.305984 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.306080 kubelet[2659]: E0910 04:49:20.306051 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.306327 kubelet[2659]: E0910 04:49:20.306315 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.306442 kubelet[2659]: W0910 04:49:20.306391 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.306442 kubelet[2659]: E0910 04:49:20.306406 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.306685 kubelet[2659]: E0910 04:49:20.306672 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.306776 kubelet[2659]: W0910 04:49:20.306764 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.306825 kubelet[2659]: E0910 04:49:20.306815 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.307436 kubelet[2659]: E0910 04:49:20.307419 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.307660 kubelet[2659]: W0910 04:49:20.307637 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.308004 kubelet[2659]: E0910 04:49:20.307729 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.308253 kubelet[2659]: E0910 04:49:20.308238 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.308420 kubelet[2659]: W0910 04:49:20.308326 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.308420 kubelet[2659]: E0910 04:49:20.308343 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.308728 kubelet[2659]: E0910 04:49:20.308714 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.308806 kubelet[2659]: W0910 04:49:20.308794 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.308856 kubelet[2659]: E0910 04:49:20.308845 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.309775 kubelet[2659]: E0910 04:49:20.309670 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.309775 kubelet[2659]: W0910 04:49:20.309683 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.309775 kubelet[2659]: E0910 04:49:20.309694 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.310013 kubelet[2659]: E0910 04:49:20.309950 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.310013 kubelet[2659]: W0910 04:49:20.309962 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.310013 kubelet[2659]: E0910 04:49:20.309973 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.311621 kubelet[2659]: E0910 04:49:20.311607 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.311805 kubelet[2659]: W0910 04:49:20.311699 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.311805 kubelet[2659]: E0910 04:49:20.311718 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.312236 kubelet[2659]: E0910 04:49:20.312214 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.312314 kubelet[2659]: W0910 04:49:20.312302 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.312369 kubelet[2659]: E0910 04:49:20.312359 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.312441 kubelet[2659]: I0910 04:49:20.312429 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsf6n\" (UniqueName: \"kubernetes.io/projected/b2a6a745-401d-495a-b87c-90f62d69bda1-kube-api-access-jsf6n\") pod \"csi-node-driver-4t2bk\" (UID: \"b2a6a745-401d-495a-b87c-90f62d69bda1\") " pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:20.312942 kubelet[2659]: E0910 04:49:20.312922 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.313067 kubelet[2659]: W0910 04:49:20.313003 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.313067 kubelet[2659]: E0910 04:49:20.313033 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.313827 kubelet[2659]: E0910 04:49:20.313810 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.314037 kubelet[2659]: W0910 04:49:20.313926 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.314037 kubelet[2659]: E0910 04:49:20.313949 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.314621 kubelet[2659]: E0910 04:49:20.314608 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.314706 kubelet[2659]: W0910 04:49:20.314694 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.314866 kubelet[2659]: E0910 04:49:20.314755 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.314866 kubelet[2659]: I0910 04:49:20.314776 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a745-401d-495a-b87c-90f62d69bda1-kubelet-dir\") pod \"csi-node-driver-4t2bk\" (UID: \"b2a6a745-401d-495a-b87c-90f62d69bda1\") " pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:20.315077 kubelet[2659]: E0910 04:49:20.315063 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.315148 kubelet[2659]: W0910 04:49:20.315137 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.315205 kubelet[2659]: E0910 04:49:20.315194 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.315287 kubelet[2659]: I0910 04:49:20.315275 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a745-401d-495a-b87c-90f62d69bda1-registration-dir\") pod \"csi-node-driver-4t2bk\" (UID: \"b2a6a745-401d-495a-b87c-90f62d69bda1\") " pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:20.315782 kubelet[2659]: E0910 04:49:20.315444 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.315782 kubelet[2659]: W0910 04:49:20.315676 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.315782 kubelet[2659]: E0910 04:49:20.315688 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.316087 kubelet[2659]: E0910 04:49:20.315951 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.316087 kubelet[2659]: W0910 04:49:20.315965 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.316087 kubelet[2659]: E0910 04:49:20.315983 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.316322 kubelet[2659]: E0910 04:49:20.316309 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.316588 kubelet[2659]: W0910 04:49:20.316509 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.316588 kubelet[2659]: E0910 04:49:20.316537 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.317260 kubelet[2659]: E0910 04:49:20.317156 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.317260 kubelet[2659]: W0910 04:49:20.317168 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.317344 kubelet[2659]: E0910 04:49:20.317265 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.317574 kubelet[2659]: E0910 04:49:20.317410 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.317574 kubelet[2659]: W0910 04:49:20.317423 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.317574 kubelet[2659]: E0910 04:49:20.317433 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.317574 kubelet[2659]: I0910 04:49:20.317463 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a745-401d-495a-b87c-90f62d69bda1-socket-dir\") pod \"csi-node-driver-4t2bk\" (UID: \"b2a6a745-401d-495a-b87c-90f62d69bda1\") " pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:20.319574 kubelet[2659]: E0910 04:49:20.319434 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.319574 kubelet[2659]: W0910 04:49:20.319450 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.319574 kubelet[2659]: E0910 04:49:20.319471 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.319574 kubelet[2659]: I0910 04:49:20.319490 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b2a6a745-401d-495a-b87c-90f62d69bda1-varrun\") pod \"csi-node-driver-4t2bk\" (UID: \"b2a6a745-401d-495a-b87c-90f62d69bda1\") " pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:20.319947 kubelet[2659]: E0910 04:49:20.319932 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.320053 kubelet[2659]: W0910 04:49:20.320034 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.321658 kubelet[2659]: E0910 04:49:20.321570 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.322340 kubelet[2659]: E0910 04:49:20.322299 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.322340 kubelet[2659]: W0910 04:49:20.322315 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.322532 kubelet[2659]: E0910 04:49:20.322427 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.322858 kubelet[2659]: E0910 04:49:20.322817 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.322858 kubelet[2659]: W0910 04:49:20.322832 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.322858 kubelet[2659]: E0910 04:49:20.322843 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.323350 kubelet[2659]: E0910 04:49:20.323306 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.323350 kubelet[2659]: W0910 04:49:20.323320 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.323350 kubelet[2659]: E0910 04:49:20.323330 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.364702 systemd[1]: Started cri-containerd-2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080.scope - libcontainer container 2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080. Sep 10 04:49:20.389236 containerd[1545]: time="2025-09-10T04:49:20.389188992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d72zp,Uid:ccf007a0-eeb6-4b37-9d86-e8f63bc9122e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\"" Sep 10 04:49:20.420860 kubelet[2659]: E0910 04:49:20.420810 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.420860 kubelet[2659]: W0910 04:49:20.420846 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.420860 kubelet[2659]: E0910 04:49:20.420866 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421107 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.421997 kubelet[2659]: W0910 04:49:20.421115 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421124 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421327 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.421997 kubelet[2659]: W0910 04:49:20.421337 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421346 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421484 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.421997 kubelet[2659]: W0910 04:49:20.421492 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421500 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.421997 kubelet[2659]: E0910 04:49:20.421662 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.422195 kubelet[2659]: W0910 04:49:20.421670 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.422195 kubelet[2659]: E0910 04:49:20.421720 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.422195 kubelet[2659]: E0910 04:49:20.421902 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.422195 kubelet[2659]: W0910 04:49:20.421911 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.422195 kubelet[2659]: E0910 04:49:20.421920 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.422195 kubelet[2659]: E0910 04:49:20.422047 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.422195 kubelet[2659]: W0910 04:49:20.422054 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.422195 kubelet[2659]: E0910 04:49:20.422061 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.422195 kubelet[2659]: E0910 04:49:20.422179 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.422195 kubelet[2659]: W0910 04:49:20.422186 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.422390 kubelet[2659]: E0910 04:49:20.422192 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.422390 kubelet[2659]: E0910 04:49:20.422349 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.422390 kubelet[2659]: W0910 04:49:20.422357 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.422390 kubelet[2659]: E0910 04:49:20.422365 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.423586 kubelet[2659]: E0910 04:49:20.423566 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.423802 kubelet[2659]: W0910 04:49:20.423657 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.423802 kubelet[2659]: E0910 04:49:20.423688 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.423995 kubelet[2659]: E0910 04:49:20.423908 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.423995 kubelet[2659]: W0910 04:49:20.423920 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.423995 kubelet[2659]: E0910 04:49:20.423968 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.424254 kubelet[2659]: E0910 04:49:20.424147 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.424254 kubelet[2659]: W0910 04:49:20.424162 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.424254 kubelet[2659]: E0910 04:49:20.424201 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.424396 kubelet[2659]: E0910 04:49:20.424383 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.424443 kubelet[2659]: W0910 04:49:20.424432 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.424563 kubelet[2659]: E0910 04:49:20.424528 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.424673 kubelet[2659]: E0910 04:49:20.424661 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.424723 kubelet[2659]: W0910 04:49:20.424712 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.424815 kubelet[2659]: E0910 04:49:20.424797 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.425505 kubelet[2659]: E0910 04:49:20.424959 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.425624 kubelet[2659]: W0910 04:49:20.425607 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.425706 kubelet[2659]: E0910 04:49:20.425693 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.426076 kubelet[2659]: E0910 04:49:20.426057 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.426076 kubelet[2659]: W0910 04:49:20.426073 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.426156 kubelet[2659]: E0910 04:49:20.426089 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.426263 kubelet[2659]: E0910 04:49:20.426248 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.426365 kubelet[2659]: W0910 04:49:20.426274 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.426365 kubelet[2659]: E0910 04:49:20.426309 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.426437 kubelet[2659]: E0910 04:49:20.426410 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.426437 kubelet[2659]: W0910 04:49:20.426435 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.426513 kubelet[2659]: E0910 04:49:20.426486 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.426641 kubelet[2659]: E0910 04:49:20.426628 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.426641 kubelet[2659]: W0910 04:49:20.426640 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.426702 kubelet[2659]: E0910 04:49:20.426676 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.433938 kubelet[2659]: E0910 04:49:20.433915 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.433938 kubelet[2659]: W0910 04:49:20.433935 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.434015 kubelet[2659]: E0910 04:49:20.433957 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.434448 kubelet[2659]: E0910 04:49:20.434429 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.434448 kubelet[2659]: W0910 04:49:20.434447 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.434529 kubelet[2659]: E0910 04:49:20.434463 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.434861 kubelet[2659]: E0910 04:49:20.434830 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.435392 kubelet[2659]: W0910 04:49:20.434851 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.435420 kubelet[2659]: E0910 04:49:20.435405 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.436068 kubelet[2659]: E0910 04:49:20.436049 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.436100 kubelet[2659]: W0910 04:49:20.436069 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.436100 kubelet[2659]: E0910 04:49:20.436090 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:20.436629 kubelet[2659]: E0910 04:49:20.436610 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.436667 kubelet[2659]: W0910 04:49:20.436627 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.436802 kubelet[2659]: E0910 04:49:20.436775 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.437773 kubelet[2659]: E0910 04:49:20.436816 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.437971 kubelet[2659]: W0910 04:49:20.437843 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.438050 kubelet[2659]: E0910 04:49:20.437933 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:20.438275 kubelet[2659]: E0910 04:49:20.438210 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:20.438379 kubelet[2659]: W0910 04:49:20.438337 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:20.438379 kubelet[2659]: E0910 04:49:20.438356 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:21.127428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount813450432.mount: Deactivated successfully. 
Sep 10 04:49:21.519945 containerd[1545]: time="2025-09-10T04:49:21.519901859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:21.520584 containerd[1545]: time="2025-09-10T04:49:21.520554937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 04:49:21.521419 containerd[1545]: time="2025-09-10T04:49:21.521385724Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:21.523054 containerd[1545]: time="2025-09-10T04:49:21.523026138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:21.523991 containerd[1545]: time="2025-09-10T04:49:21.523961597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.281001734s" Sep 10 04:49:21.524029 containerd[1545]: time="2025-09-10T04:49:21.523996755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 04:49:21.524828 containerd[1545]: time="2025-09-10T04:49:21.524758826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 04:49:21.543623 containerd[1545]: time="2025-09-10T04:49:21.543583890Z" level=info msg="CreateContainer within sandbox \"f54ae6c9c71fbc795258d5d1495b1a45e2fdc2846374e7c6ec8d7278379f33cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 04:49:21.572819 containerd[1545]: time="2025-09-10T04:49:21.572771085Z" level=info msg="Container 03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:21.590080 containerd[1545]: time="2025-09-10T04:49:21.590025451Z" level=info msg="CreateContainer within sandbox \"f54ae6c9c71fbc795258d5d1495b1a45e2fdc2846374e7c6ec8d7278379f33cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58\"" Sep 10 04:49:21.590907 containerd[1545]: time="2025-09-10T04:49:21.590723446Z" level=info msg="StartContainer for \"03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58\"" Sep 10 04:49:21.592015 containerd[1545]: time="2025-09-10T04:49:21.591964126Z" level=info msg="connecting to shim 03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58" address="unix:///run/containerd/s/166ddffc505522baca470eb174fb5d8952ebd5730b5d69071340c8330cd54f9d" protocol=ttrpc version=3 Sep 10 04:49:21.608699 systemd[1]: Started cri-containerd-03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58.scope - libcontainer container 03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58. 
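The containerd block above records the full start path for the calico-typha container: the ghcr.io/flatcar/calico/typha:v3.30.3 pull (33105775 bytes read, reported as completing in 1.281001734s), CreateContainer inside the already-running sandbox, and the ttrpc handshake with the shim over its unix socket before StartContainer. A back-of-envelope sketch, assuming the reported byte count approximates the data actually transferred for this pull, gives the effective pull throughput:

# Rough throughput estimate from the two figures containerd reports above.
bytes_read = 33_105_775        # "active requests=0, bytes read=33105775"
pull_seconds = 1.281001734     # "... in 1.281001734s"

mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"calico/typha pull: {mib_per_s:.1f} MiB/s effective")  # roughly 24.6 MiB/s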
Sep 10 04:49:21.642090 containerd[1545]: time="2025-09-10T04:49:21.642053252Z" level=info msg="StartContainer for \"03372326632bb26751d0fb64f0c1e51a31e09c74016deb4dc06ac217365e8e58\" returns successfully" Sep 10 04:49:22.264223 kubelet[2659]: E0910 04:49:22.264161 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t2bk" podUID="b2a6a745-401d-495a-b87c-90f62d69bda1" Sep 10 04:49:22.359242 kubelet[2659]: I0910 04:49:22.359048 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b4bc6496f-vs2qv" podStartSLOduration=2.075053908 podStartE2EDuration="3.35902928s" podCreationTimestamp="2025-09-10 04:49:19 +0000 UTC" firstStartedPulling="2025-09-10 04:49:20.240662102 +0000 UTC m=+20.046613303" lastFinishedPulling="2025-09-10 04:49:21.524637474 +0000 UTC m=+21.330588675" observedRunningTime="2025-09-10 04:49:22.357987383 +0000 UTC m=+22.163938584" watchObservedRunningTime="2025-09-10 04:49:22.35902928 +0000 UTC m=+22.164980481" Sep 10 04:49:22.425913 kubelet[2659]: E0910 04:49:22.425884 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.425913 kubelet[2659]: W0910 04:49:22.425907 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426054 kubelet[2659]: E0910 04:49:22.425939 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.426076 kubelet[2659]: E0910 04:49:22.426061 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.426076 kubelet[2659]: W0910 04:49:22.426069 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426139 kubelet[2659]: E0910 04:49:22.426077 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.426240 kubelet[2659]: E0910 04:49:22.426225 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.426240 kubelet[2659]: W0910 04:49:22.426237 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426303 kubelet[2659]: E0910 04:49:22.426257 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.426396 kubelet[2659]: E0910 04:49:22.426384 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.426396 kubelet[2659]: W0910 04:49:22.426395 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426457 kubelet[2659]: E0910 04:49:22.426404 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.426633 kubelet[2659]: E0910 04:49:22.426568 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.426633 kubelet[2659]: W0910 04:49:22.426580 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426633 kubelet[2659]: E0910 04:49:22.426589 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.426744 kubelet[2659]: E0910 04:49:22.426732 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.426744 kubelet[2659]: W0910 04:49:22.426742 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426822 kubelet[2659]: E0910 04:49:22.426750 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.426897 kubelet[2659]: E0910 04:49:22.426886 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.426897 kubelet[2659]: W0910 04:49:22.426896 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.426970 kubelet[2659]: E0910 04:49:22.426904 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.427050 kubelet[2659]: E0910 04:49:22.427038 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.427050 kubelet[2659]: W0910 04:49:22.427048 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.427136 kubelet[2659]: E0910 04:49:22.427055 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.427226 kubelet[2659]: E0910 04:49:22.427212 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.427226 kubelet[2659]: W0910 04:49:22.427224 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.427285 kubelet[2659]: E0910 04:49:22.427233 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.427382 kubelet[2659]: E0910 04:49:22.427370 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.427382 kubelet[2659]: W0910 04:49:22.427381 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.427453 kubelet[2659]: E0910 04:49:22.427389 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.427583 kubelet[2659]: E0910 04:49:22.427523 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.427583 kubelet[2659]: W0910 04:49:22.427533 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.427583 kubelet[2659]: E0910 04:49:22.427581 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.427767 kubelet[2659]: E0910 04:49:22.427748 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.427767 kubelet[2659]: W0910 04:49:22.427758 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.427767 kubelet[2659]: E0910 04:49:22.427766 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.428131 kubelet[2659]: E0910 04:49:22.428102 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.428131 kubelet[2659]: W0910 04:49:22.428116 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.428131 kubelet[2659]: E0910 04:49:22.428133 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.428300 kubelet[2659]: E0910 04:49:22.428274 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.428340 kubelet[2659]: W0910 04:49:22.428300 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.428340 kubelet[2659]: E0910 04:49:22.428311 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.428461 kubelet[2659]: E0910 04:49:22.428448 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.428461 kubelet[2659]: W0910 04:49:22.428459 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.428565 kubelet[2659]: E0910 04:49:22.428467 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.444053 kubelet[2659]: E0910 04:49:22.444031 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.444053 kubelet[2659]: W0910 04:49:22.444048 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.444151 kubelet[2659]: E0910 04:49:22.444062 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.444348 kubelet[2659]: E0910 04:49:22.444322 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.444348 kubelet[2659]: W0910 04:49:22.444334 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.444348 kubelet[2659]: E0910 04:49:22.444348 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.444975 kubelet[2659]: E0910 04:49:22.444959 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.444975 kubelet[2659]: W0910 04:49:22.444974 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.445039 kubelet[2659]: E0910 04:49:22.444992 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.445314 kubelet[2659]: E0910 04:49:22.445297 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.445314 kubelet[2659]: W0910 04:49:22.445312 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.445381 kubelet[2659]: E0910 04:49:22.445329 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.445891 kubelet[2659]: E0910 04:49:22.445873 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.445891 kubelet[2659]: W0910 04:49:22.445888 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.445966 kubelet[2659]: E0910 04:49:22.445904 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.447107 kubelet[2659]: E0910 04:49:22.446824 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.447107 kubelet[2659]: W0910 04:49:22.446837 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.447107 kubelet[2659]: E0910 04:49:22.446867 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.447107 kubelet[2659]: E0910 04:49:22.446993 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.447107 kubelet[2659]: W0910 04:49:22.447016 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.447266 kubelet[2659]: E0910 04:49:22.447175 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.447638 kubelet[2659]: E0910 04:49:22.447616 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.447638 kubelet[2659]: W0910 04:49:22.447633 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.447691 kubelet[2659]: E0910 04:49:22.447650 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.447988 kubelet[2659]: E0910 04:49:22.447971 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.447988 kubelet[2659]: W0910 04:49:22.447987 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.448050 kubelet[2659]: E0910 04:49:22.448005 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.448205 kubelet[2659]: E0910 04:49:22.448193 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.448205 kubelet[2659]: W0910 04:49:22.448204 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.448249 kubelet[2659]: E0910 04:49:22.448217 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.448388 kubelet[2659]: E0910 04:49:22.448373 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.448388 kubelet[2659]: W0910 04:49:22.448387 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.448447 kubelet[2659]: E0910 04:49:22.448413 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.448536 kubelet[2659]: E0910 04:49:22.448514 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.448536 kubelet[2659]: W0910 04:49:22.448525 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.448873 kubelet[2659]: E0910 04:49:22.448622 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.448873 kubelet[2659]: E0910 04:49:22.448810 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.448873 kubelet[2659]: W0910 04:49:22.448818 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.448873 kubelet[2659]: E0910 04:49:22.448833 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.449431 kubelet[2659]: E0910 04:49:22.449413 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.449431 kubelet[2659]: W0910 04:49:22.449429 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.449501 kubelet[2659]: E0910 04:49:22.449444 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.449702 kubelet[2659]: E0910 04:49:22.449691 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.449702 kubelet[2659]: W0910 04:49:22.449702 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.449768 kubelet[2659]: E0910 04:49:22.449715 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.449939 kubelet[2659]: E0910 04:49:22.449927 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.449939 kubelet[2659]: W0910 04:49:22.449938 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.449990 kubelet[2659]: E0910 04:49:22.449967 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.450175 kubelet[2659]: E0910 04:49:22.450162 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.450175 kubelet[2659]: W0910 04:49:22.450172 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.450231 kubelet[2659]: E0910 04:49:22.450185 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:49:22.450448 kubelet[2659]: E0910 04:49:22.450406 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:49:22.450448 kubelet[2659]: W0910 04:49:22.450422 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:49:22.450448 kubelet[2659]: E0910 04:49:22.450433 2659 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:49:22.558951 containerd[1545]: time="2025-09-10T04:49:22.558844743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:22.559570 containerd[1545]: time="2025-09-10T04:49:22.559521142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 04:49:22.560630 containerd[1545]: time="2025-09-10T04:49:22.560601997Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:22.562676 containerd[1545]: time="2025-09-10T04:49:22.562479043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:22.563297 containerd[1545]: time="2025-09-10T04:49:22.563254396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.038468293s" Sep 10 04:49:22.563297 containerd[1545]: time="2025-09-10T04:49:22.563289074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 04:49:22.565768 containerd[1545]: time="2025-09-10T04:49:22.565729206Z" level=info msg="CreateContainer within sandbox \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 04:49:22.574589 containerd[1545]: time="2025-09-10T04:49:22.573676845Z" level=info msg="Container d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:22.580364 containerd[1545]: time="2025-09-10T04:49:22.580330802Z" level=info msg="CreateContainer within sandbox \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\"" Sep 10 04:49:22.581268 containerd[1545]: time="2025-09-10T04:49:22.581219829Z" level=info msg="StartContainer for \"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\"" Sep 10 04:49:22.584008 containerd[1545]: time="2025-09-10T04:49:22.583978342Z" level=info msg="connecting to shim d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620" address="unix:///run/containerd/s/330cc93e716fb58d97696a301188e639be9273ad281b30687243bace7bce554e" protocol=ttrpc version=3 Sep 10 04:49:22.618840 systemd[1]: Started cri-containerd-d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620.scope - libcontainer container d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620. 
Sep 10 04:49:22.655265 containerd[1545]: time="2025-09-10T04:49:22.655218269Z" level=info msg="StartContainer for \"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\" returns successfully" Sep 10 04:49:22.675575 systemd[1]: cri-containerd-d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620.scope: Deactivated successfully. Sep 10 04:49:22.676647 systemd[1]: cri-containerd-d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620.scope: Consumed 29ms CPU time, 6.2M memory peak, 4.5M written to disk. Sep 10 04:49:22.712184 containerd[1545]: time="2025-09-10T04:49:22.712122304Z" level=info msg="received exit event container_id:\"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\" id:\"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\" pid:3347 exited_at:{seconds:1757479762 nanos:705758969}" Sep 10 04:49:22.712326 containerd[1545]: time="2025-09-10T04:49:22.712193220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\" id:\"d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620\" pid:3347 exited_at:{seconds:1757479762 nanos:705758969}" Sep 10 04:49:22.742842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9d837ee83f689504e388f8a9f089b4b7775893a96c325abf1e21529a53ff620-rootfs.mount: Deactivated successfully. Sep 10 04:49:23.341708 kubelet[2659]: I0910 04:49:23.341677 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:49:23.342910 containerd[1545]: time="2025-09-10T04:49:23.342876174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 04:49:24.266458 kubelet[2659]: E0910 04:49:24.265416 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t2bk" podUID="b2a6a745-401d-495a-b87c-90f62d69bda1" Sep 10 04:49:26.027419 containerd[1545]: time="2025-09-10T04:49:26.027361780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:26.027876 containerd[1545]: time="2025-09-10T04:49:26.027829198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 04:49:26.028689 containerd[1545]: time="2025-09-10T04:49:26.028661919Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:26.030319 containerd[1545]: time="2025-09-10T04:49:26.030278764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:26.031460 containerd[1545]: time="2025-09-10T04:49:26.031418470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.688505578s" Sep 10 04:49:26.031608 containerd[1545]: time="2025-09-10T04:49:26.031588063Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 04:49:26.035746 containerd[1545]: time="2025-09-10T04:49:26.035706190Z" level=info msg="CreateContainer within sandbox \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 04:49:26.049528 containerd[1545]: time="2025-09-10T04:49:26.048426035Z" level=info msg="Container 04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:26.056395 containerd[1545]: time="2025-09-10T04:49:26.056348785Z" level=info msg="CreateContainer within sandbox \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\"" Sep 10 04:49:26.058008 containerd[1545]: time="2025-09-10T04:49:26.056842682Z" level=info msg="StartContainer for \"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\"" Sep 10 04:49:26.059362 containerd[1545]: time="2025-09-10T04:49:26.059326685Z" level=info msg="connecting to shim 04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a" address="unix:///run/containerd/s/330cc93e716fb58d97696a301188e639be9273ad281b30687243bace7bce554e" protocol=ttrpc version=3 Sep 10 04:49:26.078698 systemd[1]: Started cri-containerd-04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a.scope - libcontainer container 04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a. Sep 10 04:49:26.111737 containerd[1545]: time="2025-09-10T04:49:26.111700716Z" level=info msg="StartContainer for \"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\" returns successfully" Sep 10 04:49:26.264957 kubelet[2659]: E0910 04:49:26.264892 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4t2bk" podUID="b2a6a745-401d-495a-b87c-90f62d69bda1" Sep 10 04:49:26.632401 systemd[1]: cri-containerd-04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a.scope: Deactivated successfully. Sep 10 04:49:26.632695 systemd[1]: cri-containerd-04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a.scope: Consumed 455ms CPU time, 174.4M memory peak, 1.9M read from disk, 165.8M written to disk. Sep 10 04:49:26.634504 containerd[1545]: time="2025-09-10T04:49:26.634461990Z" level=info msg="received exit event container_id:\"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\" id:\"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\" pid:3409 exited_at:{seconds:1757479766 nanos:634163084}" Sep 10 04:49:26.634796 containerd[1545]: time="2025-09-10T04:49:26.634641461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\" id:\"04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a\" pid:3409 exited_at:{seconds:1757479766 nanos:634163084}" Sep 10 04:49:26.657279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04fbb23b098f0ec019eed43a4210b534c43d8f47065351d9d28bc21e572b628a-rootfs.mount: Deactivated successfully. 
Sep 10 04:49:26.705960 kubelet[2659]: I0910 04:49:26.705739 2659 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 04:49:26.820313 systemd[1]: Created slice kubepods-burstable-podf1088372_b201_40dc_b560_c91286a6ca6f.slice - libcontainer container kubepods-burstable-podf1088372_b201_40dc_b560_c91286a6ca6f.slice. Sep 10 04:49:26.826865 systemd[1]: Created slice kubepods-burstable-pod25f6ec00_763a_4872_8e56_095e0486dfa0.slice - libcontainer container kubepods-burstable-pod25f6ec00_763a_4872_8e56_095e0486dfa0.slice. Sep 10 04:49:26.833962 systemd[1]: Created slice kubepods-besteffort-pod2ec4b9f5_5e58_4c69_8de9_b5996a60edb4.slice - libcontainer container kubepods-besteffort-pod2ec4b9f5_5e58_4c69_8de9_b5996a60edb4.slice. Sep 10 04:49:26.841473 systemd[1]: Created slice kubepods-besteffort-pod8e7146a5_9da1_4739_9d4e_3bdfaefcc470.slice - libcontainer container kubepods-besteffort-pod8e7146a5_9da1_4739_9d4e_3bdfaefcc470.slice. Sep 10 04:49:26.846389 systemd[1]: Created slice kubepods-besteffort-podbacb79ca_f68c_4697_83fe_0ef469ca760a.slice - libcontainer container kubepods-besteffort-podbacb79ca_f68c_4697_83fe_0ef469ca760a.slice. Sep 10 04:49:26.853645 systemd[1]: Created slice kubepods-besteffort-podf542c711_ab06_4086_ae8a_ac2e3e8de0f9.slice - libcontainer container kubepods-besteffort-podf542c711_ab06_4086_ae8a_ac2e3e8de0f9.slice. Sep 10 04:49:26.858825 systemd[1]: Created slice kubepods-besteffort-pod834e4678_ed3e_4f5a_8280_030995d342c6.slice - libcontainer container kubepods-besteffort-pod834e4678_ed3e_4f5a_8280_030995d342c6.slice. Sep 10 04:49:26.877774 kubelet[2659]: I0910 04:49:26.875709 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qdb\" (UniqueName: \"kubernetes.io/projected/2ec4b9f5-5e58-4c69-8de9-b5996a60edb4-kube-api-access-q5qdb\") pod \"calico-kube-controllers-675f4f5db4-jgx46\" (UID: \"2ec4b9f5-5e58-4c69-8de9-b5996a60edb4\") " pod="calico-system/calico-kube-controllers-675f4f5db4-jgx46" Sep 10 04:49:26.878049 kubelet[2659]: I0910 04:49:26.878009 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25f6ec00-763a-4872-8e56-095e0486dfa0-config-volume\") pod \"coredns-7c65d6cfc9-2zqbm\" (UID: \"25f6ec00-763a-4872-8e56-095e0486dfa0\") " pod="kube-system/coredns-7c65d6cfc9-2zqbm" Sep 10 04:49:26.878091 kubelet[2659]: I0910 04:49:26.878069 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8e7146a5-9da1-4739-9d4e-3bdfaefcc470-goldmane-key-pair\") pod \"goldmane-7988f88666-xtvmh\" (UID: \"8e7146a5-9da1-4739-9d4e-3bdfaefcc470\") " pod="calico-system/goldmane-7988f88666-xtvmh" Sep 10 04:49:26.878119 kubelet[2659]: I0910 04:49:26.878113 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1088372-b201-40dc-b560-c91286a6ca6f-config-volume\") pod \"coredns-7c65d6cfc9-5ft4d\" (UID: \"f1088372-b201-40dc-b560-c91286a6ca6f\") " pod="kube-system/coredns-7c65d6cfc9-5ft4d" Sep 10 04:49:26.878142 kubelet[2659]: I0910 04:49:26.878136 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-ca-bundle\") pod \"whisker-54f9fcc886-fglj4\" (UID: 
\"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\") " pod="calico-system/whisker-54f9fcc886-fglj4" Sep 10 04:49:26.878175 kubelet[2659]: I0910 04:49:26.878158 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bacb79ca-f68c-4697-83fe-0ef469ca760a-calico-apiserver-certs\") pod \"calico-apiserver-784d6d7b7f-ddd7v\" (UID: \"bacb79ca-f68c-4697-83fe-0ef469ca760a\") " pod="calico-apiserver/calico-apiserver-784d6d7b7f-ddd7v" Sep 10 04:49:26.878197 kubelet[2659]: I0910 04:49:26.878176 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsghk\" (UniqueName: \"kubernetes.io/projected/bacb79ca-f68c-4697-83fe-0ef469ca760a-kube-api-access-wsghk\") pod \"calico-apiserver-784d6d7b7f-ddd7v\" (UID: \"bacb79ca-f68c-4697-83fe-0ef469ca760a\") " pod="calico-apiserver/calico-apiserver-784d6d7b7f-ddd7v" Sep 10 04:49:26.878222 kubelet[2659]: I0910 04:49:26.878196 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rswv\" (UniqueName: \"kubernetes.io/projected/f1088372-b201-40dc-b560-c91286a6ca6f-kube-api-access-8rswv\") pod \"coredns-7c65d6cfc9-5ft4d\" (UID: \"f1088372-b201-40dc-b560-c91286a6ca6f\") " pod="kube-system/coredns-7c65d6cfc9-5ft4d" Sep 10 04:49:26.878244 kubelet[2659]: I0910 04:49:26.878220 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndqq\" (UniqueName: \"kubernetes.io/projected/25f6ec00-763a-4872-8e56-095e0486dfa0-kube-api-access-qndqq\") pod \"coredns-7c65d6cfc9-2zqbm\" (UID: \"25f6ec00-763a-4872-8e56-095e0486dfa0\") " pod="kube-system/coredns-7c65d6cfc9-2zqbm" Sep 10 04:49:26.878244 kubelet[2659]: I0910 04:49:26.878241 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkcz\" (UniqueName: \"kubernetes.io/projected/8e7146a5-9da1-4739-9d4e-3bdfaefcc470-kube-api-access-tpkcz\") pod \"goldmane-7988f88666-xtvmh\" (UID: \"8e7146a5-9da1-4739-9d4e-3bdfaefcc470\") " pod="calico-system/goldmane-7988f88666-xtvmh" Sep 10 04:49:26.878288 kubelet[2659]: I0910 04:49:26.878261 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdl9\" (UniqueName: \"kubernetes.io/projected/834e4678-ed3e-4f5a-8280-030995d342c6-kube-api-access-gvdl9\") pod \"calico-apiserver-784d6d7b7f-gqp8n\" (UID: \"834e4678-ed3e-4f5a-8280-030995d342c6\") " pod="calico-apiserver/calico-apiserver-784d6d7b7f-gqp8n" Sep 10 04:49:26.878288 kubelet[2659]: I0910 04:49:26.878279 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/834e4678-ed3e-4f5a-8280-030995d342c6-calico-apiserver-certs\") pod \"calico-apiserver-784d6d7b7f-gqp8n\" (UID: \"834e4678-ed3e-4f5a-8280-030995d342c6\") " pod="calico-apiserver/calico-apiserver-784d6d7b7f-gqp8n" Sep 10 04:49:26.878330 kubelet[2659]: I0910 04:49:26.878306 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e7146a5-9da1-4739-9d4e-3bdfaefcc470-goldmane-ca-bundle\") pod \"goldmane-7988f88666-xtvmh\" (UID: \"8e7146a5-9da1-4739-9d4e-3bdfaefcc470\") " pod="calico-system/goldmane-7988f88666-xtvmh" Sep 10 04:49:26.878352 kubelet[2659]: I0910 04:49:26.878333 2659 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xw4b\" (UniqueName: \"kubernetes.io/projected/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-kube-api-access-5xw4b\") pod \"whisker-54f9fcc886-fglj4\" (UID: \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\") " pod="calico-system/whisker-54f9fcc886-fglj4" Sep 10 04:49:26.878420 kubelet[2659]: I0910 04:49:26.878355 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e7146a5-9da1-4739-9d4e-3bdfaefcc470-config\") pod \"goldmane-7988f88666-xtvmh\" (UID: \"8e7146a5-9da1-4739-9d4e-3bdfaefcc470\") " pod="calico-system/goldmane-7988f88666-xtvmh" Sep 10 04:49:26.878420 kubelet[2659]: I0910 04:49:26.878385 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-backend-key-pair\") pod \"whisker-54f9fcc886-fglj4\" (UID: \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\") " pod="calico-system/whisker-54f9fcc886-fglj4" Sep 10 04:49:26.878420 kubelet[2659]: I0910 04:49:26.878416 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ec4b9f5-5e58-4c69-8de9-b5996a60edb4-tigera-ca-bundle\") pod \"calico-kube-controllers-675f4f5db4-jgx46\" (UID: \"2ec4b9f5-5e58-4c69-8de9-b5996a60edb4\") " pod="calico-system/calico-kube-controllers-675f4f5db4-jgx46" Sep 10 04:49:27.130699 containerd[1545]: time="2025-09-10T04:49:27.130662646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zqbm,Uid:25f6ec00-763a-4872-8e56-095e0486dfa0,Namespace:kube-system,Attempt:0,}" Sep 10 04:49:27.135589 containerd[1545]: time="2025-09-10T04:49:27.135432796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5ft4d,Uid:f1088372-b201-40dc-b560-c91286a6ca6f,Namespace:kube-system,Attempt:0,}" Sep 10 04:49:27.139759 containerd[1545]: time="2025-09-10T04:49:27.139729288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675f4f5db4-jgx46,Uid:2ec4b9f5-5e58-4c69-8de9-b5996a60edb4,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:27.145254 containerd[1545]: time="2025-09-10T04:49:27.145216968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xtvmh,Uid:8e7146a5-9da1-4739-9d4e-3bdfaefcc470,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:27.152381 containerd[1545]: time="2025-09-10T04:49:27.152087906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-ddd7v,Uid:bacb79ca-f68c-4697-83fe-0ef469ca760a,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:49:27.158698 containerd[1545]: time="2025-09-10T04:49:27.158661338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54f9fcc886-fglj4,Uid:f542c711-ab06-4086-ae8a-ac2e3e8de0f9,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:27.162894 containerd[1545]: time="2025-09-10T04:49:27.162855674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-gqp8n,Uid:834e4678-ed3e-4f5a-8280-030995d342c6,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:49:27.251037 containerd[1545]: time="2025-09-10T04:49:27.250968891Z" level=error msg="Failed to destroy network for sandbox \"d7eb31e6a5aa5ab464f7018dab35bc2378c5fb615b1475dc91206bd23df05e5f\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.253337 containerd[1545]: time="2025-09-10T04:49:27.253232752Z" level=error msg="Failed to destroy network for sandbox \"ce55cfdb6820b88c1051bb8a311364345259cc9ce5d5e5cdc081edf28ace61d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.254714 containerd[1545]: time="2025-09-10T04:49:27.254672649Z" level=error msg="Failed to destroy network for sandbox \"ab4905c73dacd780d505b1d01ea2e59de2669054829b19a5864f4f4df1f14d99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.256492 containerd[1545]: time="2025-09-10T04:49:27.256441411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675f4f5db4-jgx46,Uid:2ec4b9f5-5e58-4c69-8de9-b5996a60edb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eb31e6a5aa5ab464f7018dab35bc2378c5fb615b1475dc91206bd23df05e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.257599 containerd[1545]: time="2025-09-10T04:49:27.257459807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-ddd7v,Uid:bacb79ca-f68c-4697-83fe-0ef469ca760a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce55cfdb6820b88c1051bb8a311364345259cc9ce5d5e5cdc081edf28ace61d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.258394 containerd[1545]: time="2025-09-10T04:49:27.258354847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-gqp8n,Uid:834e4678-ed3e-4f5a-8280-030995d342c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4905c73dacd780d505b1d01ea2e59de2669054829b19a5864f4f4df1f14d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.259496 kubelet[2659]: E0910 04:49:27.259441 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4905c73dacd780d505b1d01ea2e59de2669054829b19a5864f4f4df1f14d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.259626 kubelet[2659]: E0910 04:49:27.259549 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce55cfdb6820b88c1051bb8a311364345259cc9ce5d5e5cdc081edf28ace61d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 10 04:49:27.259626 kubelet[2659]: E0910 04:49:27.259617 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce55cfdb6820b88c1051bb8a311364345259cc9ce5d5e5cdc081edf28ace61d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784d6d7b7f-ddd7v" Sep 10 04:49:27.259676 kubelet[2659]: E0910 04:49:27.259636 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce55cfdb6820b88c1051bb8a311364345259cc9ce5d5e5cdc081edf28ace61d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784d6d7b7f-ddd7v" Sep 10 04:49:27.259676 kubelet[2659]: E0910 04:49:27.259569 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4905c73dacd780d505b1d01ea2e59de2669054829b19a5864f4f4df1f14d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784d6d7b7f-gqp8n" Sep 10 04:49:27.259725 kubelet[2659]: E0910 04:49:27.259679 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4905c73dacd780d505b1d01ea2e59de2669054829b19a5864f4f4df1f14d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-784d6d7b7f-gqp8n" Sep 10 04:49:27.259725 kubelet[2659]: E0910 04:49:27.259693 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784d6d7b7f-ddd7v_calico-apiserver(bacb79ca-f68c-4697-83fe-0ef469ca760a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784d6d7b7f-ddd7v_calico-apiserver(bacb79ca-f68c-4697-83fe-0ef469ca760a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce55cfdb6820b88c1051bb8a311364345259cc9ce5d5e5cdc081edf28ace61d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784d6d7b7f-ddd7v" podUID="bacb79ca-f68c-4697-83fe-0ef469ca760a" Sep 10 04:49:27.259788 kubelet[2659]: E0910 04:49:27.259715 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-784d6d7b7f-gqp8n_calico-apiserver(834e4678-ed3e-4f5a-8280-030995d342c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-784d6d7b7f-gqp8n_calico-apiserver(834e4678-ed3e-4f5a-8280-030995d342c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab4905c73dacd780d505b1d01ea2e59de2669054829b19a5864f4f4df1f14d99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-784d6d7b7f-gqp8n" podUID="834e4678-ed3e-4f5a-8280-030995d342c6" Sep 10 04:49:27.259949 kubelet[2659]: E0910 04:49:27.259870 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eb31e6a5aa5ab464f7018dab35bc2378c5fb615b1475dc91206bd23df05e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.260366 kubelet[2659]: E0910 04:49:27.259911 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eb31e6a5aa5ab464f7018dab35bc2378c5fb615b1475dc91206bd23df05e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-675f4f5db4-jgx46" Sep 10 04:49:27.260417 kubelet[2659]: E0910 04:49:27.260374 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7eb31e6a5aa5ab464f7018dab35bc2378c5fb615b1475dc91206bd23df05e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-675f4f5db4-jgx46" Sep 10 04:49:27.260442 kubelet[2659]: E0910 04:49:27.260418 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-675f4f5db4-jgx46_calico-system(2ec4b9f5-5e58-4c69-8de9-b5996a60edb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-675f4f5db4-jgx46_calico-system(2ec4b9f5-5e58-4c69-8de9-b5996a60edb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7eb31e6a5aa5ab464f7018dab35bc2378c5fb615b1475dc91206bd23df05e5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-675f4f5db4-jgx46" podUID="2ec4b9f5-5e58-4c69-8de9-b5996a60edb4" Sep 10 04:49:27.274473 containerd[1545]: time="2025-09-10T04:49:27.273904166Z" level=error msg="Failed to destroy network for sandbox \"2cf3c1aba5838c617a5264b6a39ae25194f644480603feec65cd00b3fe70d5f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.275967 containerd[1545]: time="2025-09-10T04:49:27.275829521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xtvmh,Uid:8e7146a5-9da1-4739-9d4e-3bdfaefcc470,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf3c1aba5838c617a5264b6a39ae25194f644480603feec65cd00b3fe70d5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.276319 kubelet[2659]: E0910 04:49:27.276279 2659 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf3c1aba5838c617a5264b6a39ae25194f644480603feec65cd00b3fe70d5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.276622 kubelet[2659]: E0910 04:49:27.276344 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf3c1aba5838c617a5264b6a39ae25194f644480603feec65cd00b3fe70d5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-xtvmh" Sep 10 04:49:27.276622 kubelet[2659]: E0910 04:49:27.276363 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf3c1aba5838c617a5264b6a39ae25194f644480603feec65cd00b3fe70d5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-xtvmh" Sep 10 04:49:27.276622 kubelet[2659]: E0910 04:49:27.276405 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-xtvmh_calico-system(8e7146a5-9da1-4739-9d4e-3bdfaefcc470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-xtvmh_calico-system(8e7146a5-9da1-4739-9d4e-3bdfaefcc470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cf3c1aba5838c617a5264b6a39ae25194f644480603feec65cd00b3fe70d5f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-xtvmh" podUID="8e7146a5-9da1-4739-9d4e-3bdfaefcc470" Sep 10 04:49:27.281687 containerd[1545]: time="2025-09-10T04:49:27.281649226Z" level=error msg="Failed to destroy network for sandbox \"8ad1e94e9d35641153fb5870670601ee91ca7641c3e5fae5fb3a816f1bd48ea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.281942 containerd[1545]: time="2025-09-10T04:49:27.281908855Z" level=error msg="Failed to destroy network for sandbox \"9bd3eae20570d270d6da41e9de4fb502131c76e366bbeaf34292ac12cf581bc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.282418 containerd[1545]: time="2025-09-10T04:49:27.282390554Z" level=error msg="Failed to destroy network for sandbox \"a2c7699b42f072b6384884f6ac1add618080a7be9fe3e472d42e62d5d217a5eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.282978 containerd[1545]: time="2025-09-10T04:49:27.282853693Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zqbm,Uid:25f6ec00-763a-4872-8e56-095e0486dfa0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad1e94e9d35641153fb5870670601ee91ca7641c3e5fae5fb3a816f1bd48ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.283208 kubelet[2659]: E0910 04:49:27.283176 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad1e94e9d35641153fb5870670601ee91ca7641c3e5fae5fb3a816f1bd48ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.283257 kubelet[2659]: E0910 04:49:27.283238 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad1e94e9d35641153fb5870670601ee91ca7641c3e5fae5fb3a816f1bd48ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2zqbm" Sep 10 04:49:27.283289 kubelet[2659]: E0910 04:49:27.283257 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad1e94e9d35641153fb5870670601ee91ca7641c3e5fae5fb3a816f1bd48ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2zqbm" Sep 10 04:49:27.283536 kubelet[2659]: E0910 04:49:27.283507 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2zqbm_kube-system(25f6ec00-763a-4872-8e56-095e0486dfa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2zqbm_kube-system(25f6ec00-763a-4872-8e56-095e0486dfa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ad1e94e9d35641153fb5870670601ee91ca7641c3e5fae5fb3a816f1bd48ea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2zqbm" podUID="25f6ec00-763a-4872-8e56-095e0486dfa0" Sep 10 04:49:27.283857 containerd[1545]: time="2025-09-10T04:49:27.283821491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54f9fcc886-fglj4,Uid:f542c711-ab06-4086-ae8a-ac2e3e8de0f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd3eae20570d270d6da41e9de4fb502131c76e366bbeaf34292ac12cf581bc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.284007 kubelet[2659]: E0910 04:49:27.283974 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd3eae20570d270d6da41e9de4fb502131c76e366bbeaf34292ac12cf581bc7\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.284091 kubelet[2659]: E0910 04:49:27.284050 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd3eae20570d270d6da41e9de4fb502131c76e366bbeaf34292ac12cf581bc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54f9fcc886-fglj4" Sep 10 04:49:27.284091 kubelet[2659]: E0910 04:49:27.284067 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bd3eae20570d270d6da41e9de4fb502131c76e366bbeaf34292ac12cf581bc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54f9fcc886-fglj4" Sep 10 04:49:27.284231 kubelet[2659]: E0910 04:49:27.284138 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54f9fcc886-fglj4_calico-system(f542c711-ab06-4086-ae8a-ac2e3e8de0f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54f9fcc886-fglj4_calico-system(f542c711-ab06-4086-ae8a-ac2e3e8de0f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bd3eae20570d270d6da41e9de4fb502131c76e366bbeaf34292ac12cf581bc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54f9fcc886-fglj4" podUID="f542c711-ab06-4086-ae8a-ac2e3e8de0f9" Sep 10 04:49:27.284692 containerd[1545]: time="2025-09-10T04:49:27.284649015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5ft4d,Uid:f1088372-b201-40dc-b560-c91286a6ca6f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c7699b42f072b6384884f6ac1add618080a7be9fe3e472d42e62d5d217a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.284943 kubelet[2659]: E0910 04:49:27.284920 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c7699b42f072b6384884f6ac1add618080a7be9fe3e472d42e62d5d217a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:27.285012 kubelet[2659]: E0910 04:49:27.284954 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c7699b42f072b6384884f6ac1add618080a7be9fe3e472d42e62d5d217a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5ft4d" Sep 10 04:49:27.285012 kubelet[2659]: E0910 04:49:27.284969 2659 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c7699b42f072b6384884f6ac1add618080a7be9fe3e472d42e62d5d217a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5ft4d" Sep 10 04:49:27.285012 kubelet[2659]: E0910 04:49:27.284996 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5ft4d_kube-system(f1088372-b201-40dc-b560-c91286a6ca6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5ft4d_kube-system(f1088372-b201-40dc-b560-c91286a6ca6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2c7699b42f072b6384884f6ac1add618080a7be9fe3e472d42e62d5d217a5eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5ft4d" podUID="f1088372-b201-40dc-b560-c91286a6ca6f" Sep 10 04:49:27.363563 containerd[1545]: time="2025-09-10T04:49:27.363407002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 04:49:28.047614 systemd[1]: run-netns-cni\x2da074a426\x2db408\x2da6c8\x2db10b\x2d8f27f72adad1.mount: Deactivated successfully. Sep 10 04:49:28.047701 systemd[1]: run-netns-cni\x2dc35a586f\x2da75d\x2db594\x2d5533\x2db4fe58485668.mount: Deactivated successfully. Sep 10 04:49:28.047756 systemd[1]: run-netns-cni\x2d4c6e89c0\x2d02f8\x2dffd4\x2d17fb\x2d5897a6804bc3.mount: Deactivated successfully. Sep 10 04:49:28.047812 systemd[1]: run-netns-cni\x2dad22e395\x2d2aff\x2dde0b\x2d7f78\x2dcfa324e39f87.mount: Deactivated successfully. Sep 10 04:49:28.269389 systemd[1]: Created slice kubepods-besteffort-podb2a6a745_401d_495a_b87c_90f62d69bda1.slice - libcontainer container kubepods-besteffort-podb2a6a745_401d_495a_b87c_90f62d69bda1.slice. Sep 10 04:49:28.274031 containerd[1545]: time="2025-09-10T04:49:28.273999294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t2bk,Uid:b2a6a745-401d-495a-b87c-90f62d69bda1,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:28.353083 containerd[1545]: time="2025-09-10T04:49:28.351249039Z" level=error msg="Failed to destroy network for sandbox \"d4f008de6039c945a55fc50c5fe3192114c86cb6da8e61f3be49ba1c6de51959\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:28.353553 systemd[1]: run-netns-cni\x2dba17ffb5\x2ddb2f\x2dfa23\x2dd650\x2d51a1a2dfb911.mount: Deactivated successfully. 
Sep 10 04:49:28.420643 containerd[1545]: time="2025-09-10T04:49:28.419916777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t2bk,Uid:b2a6a745-401d-495a-b87c-90f62d69bda1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f008de6039c945a55fc50c5fe3192114c86cb6da8e61f3be49ba1c6de51959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:28.420824 kubelet[2659]: E0910 04:49:28.420135 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f008de6039c945a55fc50c5fe3192114c86cb6da8e61f3be49ba1c6de51959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:49:28.420824 kubelet[2659]: E0910 04:49:28.420187 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f008de6039c945a55fc50c5fe3192114c86cb6da8e61f3be49ba1c6de51959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:28.420824 kubelet[2659]: E0910 04:49:28.420212 2659 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f008de6039c945a55fc50c5fe3192114c86cb6da8e61f3be49ba1c6de51959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4t2bk" Sep 10 04:49:28.424257 kubelet[2659]: E0910 04:49:28.420248 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4t2bk_calico-system(b2a6a745-401d-495a-b87c-90f62d69bda1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4t2bk_calico-system(b2a6a745-401d-495a-b87c-90f62d69bda1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4f008de6039c945a55fc50c5fe3192114c86cb6da8e61f3be49ba1c6de51959\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4t2bk" podUID="b2a6a745-401d-495a-b87c-90f62d69bda1" Sep 10 04:49:30.181044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991446917.mount: Deactivated successfully. 
Sep 10 04:49:30.410607 containerd[1545]: time="2025-09-10T04:49:30.410534690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:30.411762 containerd[1545]: time="2025-09-10T04:49:30.411722287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 04:49:30.412582 containerd[1545]: time="2025-09-10T04:49:30.412533258Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:30.414790 containerd[1545]: time="2025-09-10T04:49:30.414751578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:30.415826 containerd[1545]: time="2025-09-10T04:49:30.415791100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.052210745s" Sep 10 04:49:30.415826 containerd[1545]: time="2025-09-10T04:49:30.415824179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 04:49:30.424632 containerd[1545]: time="2025-09-10T04:49:30.422366583Z" level=info msg="CreateContainer within sandbox \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 04:49:30.441637 containerd[1545]: time="2025-09-10T04:49:30.440292535Z" level=info msg="Container d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:30.449652 containerd[1545]: time="2025-09-10T04:49:30.449604599Z" level=info msg="CreateContainer within sandbox \"2ebb46bf84b8dd5f63f4308b6050997262ccfe43bf15f7b7bb228896eb6a9080\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25\"" Sep 10 04:49:30.450290 containerd[1545]: time="2025-09-10T04:49:30.450177378Z" level=info msg="StartContainer for \"d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25\"" Sep 10 04:49:30.452977 containerd[1545]: time="2025-09-10T04:49:30.452949638Z" level=info msg="connecting to shim d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25" address="unix:///run/containerd/s/330cc93e716fb58d97696a301188e639be9273ad281b30687243bace7bce554e" protocol=ttrpc version=3 Sep 10 04:49:30.469723 systemd[1]: Started cri-containerd-d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25.scope - libcontainer container d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25. Sep 10 04:49:30.507203 containerd[1545]: time="2025-09-10T04:49:30.507167199Z" level=info msg="StartContainer for \"d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25\" returns successfully" Sep 10 04:49:30.642407 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 04:49:30.642509 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Sep 10 04:49:30.803796 kubelet[2659]: I0910 04:49:30.803468 2659 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-ca-bundle\") pod \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\" (UID: \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\") " Sep 10 04:49:30.803796 kubelet[2659]: I0910 04:49:30.803519 2659 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xw4b\" (UniqueName: \"kubernetes.io/projected/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-kube-api-access-5xw4b\") pod \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\" (UID: \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\") " Sep 10 04:49:30.803796 kubelet[2659]: I0910 04:49:30.803558 2659 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-backend-key-pair\") pod \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\" (UID: \"f542c711-ab06-4086-ae8a-ac2e3e8de0f9\") " Sep 10 04:49:30.813998 kubelet[2659]: I0910 04:49:30.813955 2659 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f542c711-ab06-4086-ae8a-ac2e3e8de0f9" (UID: "f542c711-ab06-4086-ae8a-ac2e3e8de0f9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 04:49:30.815310 kubelet[2659]: I0910 04:49:30.815240 2659 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f542c711-ab06-4086-ae8a-ac2e3e8de0f9" (UID: "f542c711-ab06-4086-ae8a-ac2e3e8de0f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 04:49:30.815787 kubelet[2659]: I0910 04:49:30.815757 2659 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-kube-api-access-5xw4b" (OuterVolumeSpecName: "kube-api-access-5xw4b") pod "f542c711-ab06-4086-ae8a-ac2e3e8de0f9" (UID: "f542c711-ab06-4086-ae8a-ac2e3e8de0f9"). InnerVolumeSpecName "kube-api-access-5xw4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 04:49:30.903815 kubelet[2659]: I0910 04:49:30.903768 2659 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 04:49:30.903815 kubelet[2659]: I0910 04:49:30.903800 2659 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xw4b\" (UniqueName: \"kubernetes.io/projected/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-kube-api-access-5xw4b\") on node \"localhost\" DevicePath \"\"" Sep 10 04:49:30.903815 kubelet[2659]: I0910 04:49:30.903809 2659 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f542c711-ab06-4086-ae8a-ac2e3e8de0f9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 04:49:31.181824 systemd[1]: var-lib-kubelet-pods-f542c711\x2dab06\x2d4086\x2dae8a\x2dac2e3e8de0f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xw4b.mount: Deactivated successfully. 
Sep 10 04:49:31.182209 systemd[1]: var-lib-kubelet-pods-f542c711\x2dab06\x2d4086\x2dae8a\x2dac2e3e8de0f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 04:49:31.378016 systemd[1]: Removed slice kubepods-besteffort-podf542c711_ab06_4086_ae8a_ac2e3e8de0f9.slice - libcontainer container kubepods-besteffort-podf542c711_ab06_4086_ae8a_ac2e3e8de0f9.slice. Sep 10 04:49:31.388686 kubelet[2659]: I0910 04:49:31.388615 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d72zp" podStartSLOduration=2.362591346 podStartE2EDuration="12.388596474s" podCreationTimestamp="2025-09-10 04:49:19 +0000 UTC" firstStartedPulling="2025-09-10 04:49:20.390327353 +0000 UTC m=+20.196278554" lastFinishedPulling="2025-09-10 04:49:30.416332481 +0000 UTC m=+30.222283682" observedRunningTime="2025-09-10 04:49:31.387877138 +0000 UTC m=+31.193828339" watchObservedRunningTime="2025-09-10 04:49:31.388596474 +0000 UTC m=+31.194547715" Sep 10 04:49:31.434453 systemd[1]: Created slice kubepods-besteffort-pod4749a762_e220_412a_b1ac_fded4f1c6cdd.slice - libcontainer container kubepods-besteffort-pod4749a762_e220_412a_b1ac_fded4f1c6cdd.slice. Sep 10 04:49:31.507309 kubelet[2659]: I0910 04:49:31.507266 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4749a762-e220-412a-b1ac-fded4f1c6cdd-whisker-ca-bundle\") pod \"whisker-5d4866c8df-grvdx\" (UID: \"4749a762-e220-412a-b1ac-fded4f1c6cdd\") " pod="calico-system/whisker-5d4866c8df-grvdx" Sep 10 04:49:31.507601 kubelet[2659]: I0910 04:49:31.507495 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4749a762-e220-412a-b1ac-fded4f1c6cdd-whisker-backend-key-pair\") pod \"whisker-5d4866c8df-grvdx\" (UID: \"4749a762-e220-412a-b1ac-fded4f1c6cdd\") " pod="calico-system/whisker-5d4866c8df-grvdx" Sep 10 04:49:31.507601 kubelet[2659]: I0910 04:49:31.507525 2659 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrx7\" (UniqueName: \"kubernetes.io/projected/4749a762-e220-412a-b1ac-fded4f1c6cdd-kube-api-access-lcrx7\") pod \"whisker-5d4866c8df-grvdx\" (UID: \"4749a762-e220-412a-b1ac-fded4f1c6cdd\") " pod="calico-system/whisker-5d4866c8df-grvdx" Sep 10 04:49:31.739761 containerd[1545]: time="2025-09-10T04:49:31.739643625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4866c8df-grvdx,Uid:4749a762-e220-412a-b1ac-fded4f1c6cdd,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:31.910146 systemd-networkd[1438]: cali3b4991551d7: Link UP Sep 10 04:49:31.911116 systemd-networkd[1438]: cali3b4991551d7: Gained carrier Sep 10 04:49:31.928406 containerd[1545]: 2025-09-10 04:49:31.761 [INFO][3792] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 04:49:31.928406 containerd[1545]: 2025-09-10 04:49:31.790 [INFO][3792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d4866c8df--grvdx-eth0 whisker-5d4866c8df- calico-system 4749a762-e220-412a-b1ac-fded4f1c6cdd 883 0 2025-09-10 04:49:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d4866c8df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} 
{k8s localhost whisker-5d4866c8df-grvdx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3b4991551d7 [] [] }} ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-" Sep 10 04:49:31.928406 containerd[1545]: 2025-09-10 04:49:31.790 [INFO][3792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:31.928406 containerd[1545]: 2025-09-10 04:49:31.846 [INFO][3806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" HandleID="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Workload="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.846 [INFO][3806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" HandleID="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Workload="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d4866c8df-grvdx", "timestamp":"2025-09-10 04:49:31.846718039 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.846 [INFO][3806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.846 [INFO][3806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.847 [INFO][3806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.857 [INFO][3806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" host="localhost" Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.867 [INFO][3806] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.873 [INFO][3806] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.875 [INFO][3806] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.877 [INFO][3806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:31.928625 containerd[1545]: 2025-09-10 04:49:31.877 [INFO][3806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" host="localhost" Sep 10 04:49:31.928845 containerd[1545]: 2025-09-10 04:49:31.879 [INFO][3806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def Sep 10 04:49:31.928845 containerd[1545]: 2025-09-10 04:49:31.884 [INFO][3806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" host="localhost" Sep 10 04:49:31.928845 containerd[1545]: 2025-09-10 04:49:31.895 [INFO][3806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" host="localhost" Sep 10 04:49:31.928845 containerd[1545]: 2025-09-10 04:49:31.895 [INFO][3806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" host="localhost" Sep 10 04:49:31.928845 containerd[1545]: 2025-09-10 04:49:31.895 [INFO][3806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 04:49:31.928845 containerd[1545]: 2025-09-10 04:49:31.895 [INFO][3806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" HandleID="k8s-pod-network.22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Workload="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:31.928957 containerd[1545]: 2025-09-10 04:49:31.899 [INFO][3792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d4866c8df--grvdx-eth0", GenerateName:"whisker-5d4866c8df-", Namespace:"calico-system", SelfLink:"", UID:"4749a762-e220-412a-b1ac-fded4f1c6cdd", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d4866c8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d4866c8df-grvdx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b4991551d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:31.928957 containerd[1545]: 2025-09-10 04:49:31.900 [INFO][3792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:31.929027 containerd[1545]: 2025-09-10 04:49:31.900 [INFO][3792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b4991551d7 ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:31.929027 containerd[1545]: 2025-09-10 04:49:31.912 [INFO][3792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:31.929063 containerd[1545]: 2025-09-10 04:49:31.913 [INFO][3792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d4866c8df--grvdx-eth0", GenerateName:"whisker-5d4866c8df-", Namespace:"calico-system", SelfLink:"", UID:"4749a762-e220-412a-b1ac-fded4f1c6cdd", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d4866c8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def", Pod:"whisker-5d4866c8df-grvdx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b4991551d7", MAC:"16:2c:5d:fd:54:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:31.929111 containerd[1545]: 2025-09-10 04:49:31.924 [INFO][3792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" Namespace="calico-system" Pod="whisker-5d4866c8df-grvdx" WorkloadEndpoint="localhost-k8s-whisker--5d4866c8df--grvdx-eth0" Sep 10 04:49:32.012718 containerd[1545]: time="2025-09-10T04:49:32.012576848Z" level=info msg="connecting to shim 22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def" address="unix:///run/containerd/s/3d349fea4ff1a115b2a124825f0024d1ebf742e0a29f0e204235ac07170c9878" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:32.065271 systemd[1]: Started cri-containerd-22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def.scope - libcontainer container 22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def. 
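The pod_startup_latency_tracker entry above for calico-node-d72zp reports podStartE2EDuration=12.388596474s alongside podStartSLOduration=2.362591346. The logged timestamps are consistent with the end-to-end figure being watchObservedRunningTime minus podCreationTimestamp, and the SLO figure being that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below only rechecks that arithmetic from the values as logged; it is a reading of the log, not kubelet code.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the kubelet's timestamp format (Go's default
        // time.Time string form).
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Values copied from the calico-node-d72zp entry.
        created := parse("2025-09-10 04:49:19 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2025-09-10 04:49:20.390327353 +0000 UTC") // firstStartedPulling
        lastPull := parse("2025-09-10 04:49:30.416332481 +0000 UTC")  // lastFinishedPulling
        watched := parse("2025-09-10 04:49:31.388596474 +0000 UTC")   // watchObservedRunningTime

        e2e := watched.Sub(created)          // prints as 12.388596474s
        slo := e2e - lastPull.Sub(firstPull) // prints as 2.362591346s

        fmt.Println("E2E:", e2e)
        fmt.Println("SLO:", slo)
    }

The later whisker-5d4866c8df-grvdx entry obeys the same relation: 4.40492842s end to end, 2.177602836 once its roughly 2.227s pull window is excluded.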
Sep 10 04:49:32.087963 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:32.155881 containerd[1545]: time="2025-09-10T04:49:32.155825459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4866c8df-grvdx,Uid:4749a762-e220-412a-b1ac-fded4f1c6cdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def\"" Sep 10 04:49:32.157252 containerd[1545]: time="2025-09-10T04:49:32.157215815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 04:49:32.266369 kubelet[2659]: I0910 04:49:32.266262 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f542c711-ab06-4086-ae8a-ac2e3e8de0f9" path="/var/lib/kubelet/pods/f542c711-ab06-4086-ae8a-ac2e3e8de0f9/volumes" Sep 10 04:49:32.318953 kubelet[2659]: I0910 04:49:32.318905 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:49:32.374833 kubelet[2659]: I0910 04:49:32.374792 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:49:32.976505 containerd[1545]: time="2025-09-10T04:49:32.976461125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 04:49:32.980164 containerd[1545]: time="2025-09-10T04:49:32.980125649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 822.875514ms" Sep 10 04:49:32.980164 containerd[1545]: time="2025-09-10T04:49:32.980164287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 04:49:32.980444 containerd[1545]: time="2025-09-10T04:49:32.980330642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:32.991349 containerd[1545]: time="2025-09-10T04:49:32.991167098Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:32.992802 containerd[1545]: time="2025-09-10T04:49:32.992769807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:32.993465 containerd[1545]: time="2025-09-10T04:49:32.993433106Z" level=info msg="CreateContainer within sandbox \"22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 04:49:33.001443 containerd[1545]: time="2025-09-10T04:49:33.001403415Z" level=info msg="Container e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:33.004270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939199202.mount: Deactivated successfully. 
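Unit names such as var-lib-kubelet-pods-f542c711\x2dab06…\x2daccess\x2d5xw4b.mount and var-lib-containerd-tmpmounts-containerd\x2dmount939199202.mount are systemd's escaped form of mount-point paths: slashes turn into dashes, while bytes like '-' and '~' are hex-escaped so the original path can be recovered from the unit name. A rough Go approximation of that escaping (systemd-escape --path is the real tool; this sketch only mirrors the rules visible here):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeComponent hex-escapes every byte outside [A-Za-z0-9_.:],
    // roughly what systemd does to a single path component.
    func escapeComponent(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            c := s[i]
            switch {
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.', c == ':':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e
            }
        }
        return b.String()
    }

    // escapePath approximates `systemd-escape --path --suffix=mount PATH`:
    // strip the outer slashes, escape each component, join with '-'.
    func escapePath(p string) string {
        parts := strings.Split(strings.Trim(p, "/"), "/")
        for i, part := range parts {
            parts[i] = escapeComponent(part)
        }
        return strings.Join(parts, "-") + ".mount"
    }

    func main() {
        p := "/var/lib/kubelet/pods/f542c711-ab06-4086-ae8a-ac2e3e8de0f9/volumes/kubernetes.io~projected/kube-api-access-5xw4b"
        fmt.Println(escapePath(p))
        // var-lib-kubelet-pods-f542c711\x2dab06\x2d4086\x2dae8a\x2dac2e3e8de0f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xw4b.mount
    }

Feeding it the projected-volume path from the kubelet lines above reproduces the exact unit name that systemd reports as deactivated.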
Sep 10 04:49:33.008082 containerd[1545]: time="2025-09-10T04:49:33.008038937Z" level=info msg="CreateContainer within sandbox \"22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5\"" Sep 10 04:49:33.008565 containerd[1545]: time="2025-09-10T04:49:33.008505163Z" level=info msg="StartContainer for \"e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5\"" Sep 10 04:49:33.009680 containerd[1545]: time="2025-09-10T04:49:33.009653289Z" level=info msg="connecting to shim e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5" address="unix:///run/containerd/s/3d349fea4ff1a115b2a124825f0024d1ebf742e0a29f0e204235ac07170c9878" protocol=ttrpc version=3 Sep 10 04:49:33.029709 systemd[1]: Started cri-containerd-e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5.scope - libcontainer container e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5. Sep 10 04:49:33.066735 containerd[1545]: time="2025-09-10T04:49:33.066597274Z" level=info msg="StartContainer for \"e8c966ed24938c7c0cc881484ad408fc78abe5ea5e7cb50b7a0971f169843fc5\" returns successfully" Sep 10 04:49:33.072615 containerd[1545]: time="2025-09-10T04:49:33.072102390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 04:49:33.445772 systemd-networkd[1438]: vxlan.calico: Link UP Sep 10 04:49:33.447051 systemd-networkd[1438]: vxlan.calico: Gained carrier Sep 10 04:49:33.687385 systemd-networkd[1438]: cali3b4991551d7: Gained IPv6LL Sep 10 04:49:34.351313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847022387.mount: Deactivated successfully. Sep 10 04:49:34.378604 containerd[1545]: time="2025-09-10T04:49:34.378565447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:34.379145 containerd[1545]: time="2025-09-10T04:49:34.379115631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 04:49:34.380023 containerd[1545]: time="2025-09-10T04:49:34.379978567Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:34.382573 containerd[1545]: time="2025-09-10T04:49:34.382493057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:34.383326 containerd[1545]: time="2025-09-10T04:49:34.383210877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.310432747s" Sep 10 04:49:34.383326 containerd[1545]: time="2025-09-10T04:49:34.383242396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 04:49:34.386737 containerd[1545]: time="2025-09-10T04:49:34.386707780Z" level=info 
msg="CreateContainer within sandbox \"22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 04:49:34.394489 containerd[1545]: time="2025-09-10T04:49:34.394451204Z" level=info msg="Container 05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:34.398478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230830015.mount: Deactivated successfully. Sep 10 04:49:34.402296 containerd[1545]: time="2025-09-10T04:49:34.402248306Z" level=info msg="CreateContainer within sandbox \"22cdb3a0d7cf9c873ccf6070726b66e681211810d2ea486716485a875ce20def\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1\"" Sep 10 04:49:34.402989 containerd[1545]: time="2025-09-10T04:49:34.402966686Z" level=info msg="StartContainer for \"05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1\"" Sep 10 04:49:34.404295 containerd[1545]: time="2025-09-10T04:49:34.404233571Z" level=info msg="connecting to shim 05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1" address="unix:///run/containerd/s/3d349fea4ff1a115b2a124825f0024d1ebf742e0a29f0e204235ac07170c9878" protocol=ttrpc version=3 Sep 10 04:49:34.427681 systemd[1]: Started cri-containerd-05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1.scope - libcontainer container 05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1. Sep 10 04:49:34.463863 containerd[1545]: time="2025-09-10T04:49:34.463770189Z" level=info msg="StartContainer for \"05fde29f97d64043fc9ffede6ba574508993c41fd80cebaacfa57539b29f72a1\" returns successfully" Sep 10 04:49:35.157691 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL Sep 10 04:49:35.405013 kubelet[2659]: I0910 04:49:35.404945 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d4866c8df-grvdx" podStartSLOduration=2.177602836 podStartE2EDuration="4.40492842s" podCreationTimestamp="2025-09-10 04:49:31 +0000 UTC" firstStartedPulling="2025-09-10 04:49:32.157034261 +0000 UTC m=+31.962985462" lastFinishedPulling="2025-09-10 04:49:34.384359845 +0000 UTC m=+34.190311046" observedRunningTime="2025-09-10 04:49:35.40452391 +0000 UTC m=+35.210475111" watchObservedRunningTime="2025-09-10 04:49:35.40492842 +0000 UTC m=+35.210879621" Sep 10 04:49:38.265057 containerd[1545]: time="2025-09-10T04:49:38.264957634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xtvmh,Uid:8e7146a5-9da1-4739-9d4e-3bdfaefcc470,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:38.366252 systemd-networkd[1438]: cali89654e86ac1: Link UP Sep 10 04:49:38.366587 systemd-networkd[1438]: cali89654e86ac1: Gained carrier Sep 10 04:49:38.381270 containerd[1545]: 2025-09-10 04:49:38.304 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--xtvmh-eth0 goldmane-7988f88666- calico-system 8e7146a5-9da1-4739-9d4e-3bdfaefcc470 821 0 2025-09-10 04:49:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-xtvmh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali89654e86ac1 [] [] }} 
ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-" Sep 10 04:49:38.381270 containerd[1545]: 2025-09-10 04:49:38.304 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.381270 containerd[1545]: 2025-09-10 04:49:38.326 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" HandleID="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Workload="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.326 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" HandleID="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Workload="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-xtvmh", "timestamp":"2025-09-10 04:49:38.326804145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.326 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.327 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.327 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.336 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" host="localhost" Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.340 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.344 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.346 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.348 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:38.382328 containerd[1545]: 2025-09-10 04:49:38.348 [INFO][4200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" host="localhost" Sep 10 04:49:38.382872 containerd[1545]: 2025-09-10 04:49:38.350 [INFO][4200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577 Sep 10 04:49:38.382872 containerd[1545]: 2025-09-10 04:49:38.354 [INFO][4200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" host="localhost" Sep 10 04:49:38.382872 containerd[1545]: 2025-09-10 04:49:38.359 [INFO][4200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" host="localhost" Sep 10 04:49:38.382872 containerd[1545]: 2025-09-10 04:49:38.359 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" host="localhost" Sep 10 04:49:38.382872 containerd[1545]: 2025-09-10 04:49:38.360 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 04:49:38.382872 containerd[1545]: 2025-09-10 04:49:38.360 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" HandleID="k8s-pod-network.ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Workload="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.383010 containerd[1545]: 2025-09-10 04:49:38.364 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--xtvmh-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8e7146a5-9da1-4739-9d4e-3bdfaefcc470", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-xtvmh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali89654e86ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:38.383010 containerd[1545]: 2025-09-10 04:49:38.364 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.383083 containerd[1545]: 2025-09-10 04:49:38.364 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89654e86ac1 ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.383083 containerd[1545]: 2025-09-10 04:49:38.366 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.383134 containerd[1545]: 2025-09-10 04:49:38.367 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--xtvmh-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8e7146a5-9da1-4739-9d4e-3bdfaefcc470", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577", Pod:"goldmane-7988f88666-xtvmh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali89654e86ac1", MAC:"9a:82:88:9e:c4:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:38.383225 containerd[1545]: 2025-09-10 04:49:38.378 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" Namespace="calico-system" Pod="goldmane-7988f88666-xtvmh" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--xtvmh-eth0" Sep 10 04:49:38.406034 containerd[1545]: time="2025-09-10T04:49:38.405988174Z" level=info msg="connecting to shim ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577" address="unix:///run/containerd/s/182e8d1bae510a68951bc17068537a07877621ff64dd516607fb6404fb7ba8b2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:38.436748 systemd[1]: Started cri-containerd-ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577.scope - libcontainer container ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577. Sep 10 04:49:38.456933 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:38.479447 containerd[1545]: time="2025-09-10T04:49:38.479412270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-xtvmh,Uid:8e7146a5-9da1-4739-9d4e-3bdfaefcc470,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577\"" Sep 10 04:49:38.481131 containerd[1545]: time="2025-09-10T04:49:38.481051622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 04:49:39.112796 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:47828.service - OpenSSH per-connection server daemon (10.0.0.1:47828). Sep 10 04:49:39.181124 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 47828 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:39.182607 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:39.186206 systemd-logind[1516]: New session 8 of user core. Sep 10 04:49:39.209701 systemd[1]: Started session-8.scope - Session 8 of User core. 
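The IPAM sequence above for goldmane-7988f88666-xtvmh repeats the pattern already seen for the whisker pod: the node holds an affinity for the block 192.168.88.128/26 and hands out the next free address, which is why goldmane receives 192.168.88.130 right after whisker's 192.168.88.129. A small sketch of the address arithmetic behind that /26 (Calico's allocator additionally tracks handles, affinities and the host-wide lock; none of that is modelled here):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The affine block every pod IP in this log is carved from.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // A /26 spans 64 addresses, 192.168.88.128 through 192.168.88.191.
        // Walking it from the base address shows the order in which the
        // first sandboxes were assigned .129, .130, .131 and .132.
        addr := block.Addr()
        for i := 0; i < 5 && block.Contains(addr); i++ {
            fmt.Println(addr)
            addr = addr.Next()
        }
    }

The later coredns and calico-apiserver sandboxes continue the same sequence with 192.168.88.131 and 192.168.88.132.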
Sep 10 04:49:39.264807 containerd[1545]: time="2025-09-10T04:49:39.264751226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5ft4d,Uid:f1088372-b201-40dc-b560-c91286a6ca6f,Namespace:kube-system,Attempt:0,}" Sep 10 04:49:39.264807 containerd[1545]: time="2025-09-10T04:49:39.264771226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-ddd7v,Uid:bacb79ca-f68c-4697-83fe-0ef469ca760a,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:49:39.264957 containerd[1545]: time="2025-09-10T04:49:39.264822146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zqbm,Uid:25f6ec00-763a-4872-8e56-095e0486dfa0,Namespace:kube-system,Attempt:0,}" Sep 10 04:49:39.522944 systemd-networkd[1438]: cali95ad7710966: Link UP Sep 10 04:49:39.523526 systemd-networkd[1438]: cali95ad7710966: Gained carrier Sep 10 04:49:39.549239 containerd[1545]: 2025-09-10 04:49:39.353 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0 coredns-7c65d6cfc9- kube-system f1088372-b201-40dc-b560-c91286a6ca6f 817 0 2025-09-10 04:49:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-5ft4d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95ad7710966 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-" Sep 10 04:49:39.549239 containerd[1545]: 2025-09-10 04:49:39.356 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.549239 containerd[1545]: 2025-09-10 04:49:39.418 [INFO][4338] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" HandleID="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Workload="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4338] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" HandleID="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Workload="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-5ft4d", "timestamp":"2025-09-10 04:49:39.418825685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.442 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" host="localhost" Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.456 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.460 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.462 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.465 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:39.550517 containerd[1545]: 2025-09-10 04:49:39.465 [INFO][4338] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" host="localhost" Sep 10 04:49:39.550785 containerd[1545]: 2025-09-10 04:49:39.493 [INFO][4338] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298 Sep 10 04:49:39.550785 containerd[1545]: 2025-09-10 04:49:39.500 [INFO][4338] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" host="localhost" Sep 10 04:49:39.550785 containerd[1545]: 2025-09-10 04:49:39.515 [INFO][4338] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" host="localhost" Sep 10 04:49:39.550785 containerd[1545]: 2025-09-10 04:49:39.515 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" host="localhost" Sep 10 04:49:39.550785 containerd[1545]: 2025-09-10 04:49:39.515 [INFO][4338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 04:49:39.550785 containerd[1545]: 2025-09-10 04:49:39.515 [INFO][4338] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" HandleID="k8s-pod-network.d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Workload="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.550903 containerd[1545]: 2025-09-10 04:49:39.519 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f1088372-b201-40dc-b560-c91286a6ca6f", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-5ft4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95ad7710966", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:39.550963 containerd[1545]: 2025-09-10 04:49:39.520 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.550963 containerd[1545]: 2025-09-10 04:49:39.520 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95ad7710966 ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.550963 containerd[1545]: 2025-09-10 04:49:39.522 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.551024 
containerd[1545]: 2025-09-10 04:49:39.523 [INFO][4289] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f1088372-b201-40dc-b560-c91286a6ca6f", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298", Pod:"coredns-7c65d6cfc9-5ft4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95ad7710966", MAC:"0e:e5:12:64:be:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:39.551024 containerd[1545]: 2025-09-10 04:49:39.539 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5ft4d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5ft4d-eth0" Sep 10 04:49:39.559486 sshd[4280]: Connection closed by 10.0.0.1 port 47828 Sep 10 04:49:39.559730 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:39.565501 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:47828.service: Deactivated successfully. Sep 10 04:49:39.569368 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 04:49:39.571261 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Sep 10 04:49:39.572982 systemd-logind[1516]: Removed session 8. 
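In the coredns endpoint dump above, the WorkloadEndpointPort values are printed as Go hex literals (Port:0x35 and Port:0x23c1). Decoded, they are the usual coredns ports: 53 for DNS over UDP and TCP, and 9153 for the metrics endpoint.

    package main

    import "fmt"

    func main() {
        // Ports exactly as they appear in the endpoint dump.
        ports := []struct {
            name string
            val  uint16
        }{
            {"dns", 0x35},       // UDP
            {"dns-tcp", 0x35},   // TCP
            {"metrics", 0x23c1}, // TCP
        }
        for _, p := range ports {
            fmt.Printf("%-8s 0x%x = %d\n", p.name, p.val, p.val)
        }
    }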
Sep 10 04:49:39.590576 containerd[1545]: time="2025-09-10T04:49:39.589723307Z" level=info msg="connecting to shim d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298" address="unix:///run/containerd/s/2f772e47feb79fdfa120ed193e37fca35b401e1d09662417c2214772e4bbf1a9" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:39.598909 systemd-networkd[1438]: cali35ebaaa4259: Link UP Sep 10 04:49:39.599342 systemd-networkd[1438]: cali35ebaaa4259: Gained carrier Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.335 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0 calico-apiserver-784d6d7b7f- calico-apiserver bacb79ca-f68c-4697-83fe-0ef469ca760a 822 0 2025-09-10 04:49:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784d6d7b7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-784d6d7b7f-ddd7v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35ebaaa4259 [] [] }} ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.335 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" HandleID="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Workload="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" HandleID="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Workload="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b95f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-784d6d7b7f-ddd7v", "timestamp":"2025-09-10 04:49:39.419238363 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.419 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.515 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.516 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.538 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.554 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.561 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.563 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.566 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.566 [INFO][4332] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.569 [INFO][4332] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028 Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.574 [INFO][4332] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.584 [INFO][4332] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.584 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" host="localhost" Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.584 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 04:49:39.614412 containerd[1545]: 2025-09-10 04:49:39.585 [INFO][4332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" HandleID="k8s-pod-network.e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Workload="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.615251 containerd[1545]: 2025-09-10 04:49:39.594 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0", GenerateName:"calico-apiserver-784d6d7b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bacb79ca-f68c-4697-83fe-0ef469ca760a", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784d6d7b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-784d6d7b7f-ddd7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35ebaaa4259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:39.615251 containerd[1545]: 2025-09-10 04:49:39.594 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.615251 containerd[1545]: 2025-09-10 04:49:39.595 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35ebaaa4259 ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.615251 containerd[1545]: 2025-09-10 04:49:39.600 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.615251 containerd[1545]: 2025-09-10 04:49:39.601 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0", GenerateName:"calico-apiserver-784d6d7b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bacb79ca-f68c-4697-83fe-0ef469ca760a", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784d6d7b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028", Pod:"calico-apiserver-784d6d7b7f-ddd7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35ebaaa4259", MAC:"d6:6d:20:60:c6:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:39.615251 containerd[1545]: 2025-09-10 04:49:39.611 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-ddd7v" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--ddd7v-eth0" Sep 10 04:49:39.619008 systemd[1]: Started cri-containerd-d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298.scope - libcontainer container d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298. Sep 10 04:49:39.637069 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:39.638335 containerd[1545]: time="2025-09-10T04:49:39.638280845Z" level=info msg="connecting to shim e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028" address="unix:///run/containerd/s/fe434224fa10a7a64e58d357bd02da36c8c6aa4f569ec89338b3b8b562b62389" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:39.684717 systemd[1]: Started cri-containerd-e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028.scope - libcontainer container e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028. 
Sep 10 04:49:39.691866 containerd[1545]: time="2025-09-10T04:49:39.691664602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5ft4d,Uid:f1088372-b201-40dc-b560-c91286a6ca6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298\"" Sep 10 04:49:39.696094 containerd[1545]: time="2025-09-10T04:49:39.696063422Z" level=info msg="CreateContainer within sandbox \"d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 04:49:39.702220 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:39.703190 systemd-networkd[1438]: calia5b74eca969: Link UP Sep 10 04:49:39.704126 systemd-networkd[1438]: calia5b74eca969: Gained carrier Sep 10 04:49:39.706224 containerd[1545]: time="2025-09-10T04:49:39.706191536Z" level=info msg="Container a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.353 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0 coredns-7c65d6cfc9- kube-system 25f6ec00-763a-4872-8e56-095e0486dfa0 819 0 2025-09-10 04:49:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-2zqbm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5b74eca969 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.356 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.441 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" HandleID="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Workload="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.441 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" HandleID="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Workload="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d990), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-2zqbm", "timestamp":"2025-09-10 04:49:39.441528981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:39.720280 containerd[1545]: 
2025-09-10 04:49:39.441 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.585 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.585 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.635 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.653 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.662 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.665 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.668 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.668 [INFO][4344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.669 [INFO][4344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.678 [INFO][4344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.687 [INFO][4344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.687 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" host="localhost" Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.687 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 04:49:39.720280 containerd[1545]: 2025-09-10 04:49:39.687 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" HandleID="k8s-pod-network.fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Workload="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.721516 containerd[1545]: 2025-09-10 04:49:39.694 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"25f6ec00-763a-4872-8e56-095e0486dfa0", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-2zqbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5b74eca969", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:39.721516 containerd[1545]: 2025-09-10 04:49:39.695 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.721516 containerd[1545]: 2025-09-10 04:49:39.695 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5b74eca969 ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.721516 containerd[1545]: 2025-09-10 04:49:39.705 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.721516 
containerd[1545]: 2025-09-10 04:49:39.706 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"25f6ec00-763a-4872-8e56-095e0486dfa0", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b", Pod:"coredns-7c65d6cfc9-2zqbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5b74eca969", MAC:"fa:03:b6:9f:2c:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:39.721516 containerd[1545]: 2025-09-10 04:49:39.716 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2zqbm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--2zqbm-eth0" Sep 10 04:49:39.722702 containerd[1545]: time="2025-09-10T04:49:39.721167988Z" level=info msg="CreateContainer within sandbox \"d43f35af8ff4d6b3d43b09e998d85a64cb26b8cd91ed490a2cf40e8ffe49a298\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1\"" Sep 10 04:49:39.723711 containerd[1545]: time="2025-09-10T04:49:39.723686216Z" level=info msg="StartContainer for \"a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1\"" Sep 10 04:49:39.724531 containerd[1545]: time="2025-09-10T04:49:39.724499373Z" level=info msg="connecting to shim a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1" address="unix:///run/containerd/s/2f772e47feb79fdfa120ed193e37fca35b401e1d09662417c2214772e4bbf1a9" protocol=ttrpc version=3 Sep 10 04:49:39.751054 containerd[1545]: time="2025-09-10T04:49:39.751010692Z" level=info msg="connecting to shim fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b" 
address="unix:///run/containerd/s/f3d0d65eae5b4241971ac88e3ded7e9744008daba16245a8202a3dd7f095f075" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:39.759585 systemd[1]: Started cri-containerd-a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1.scope - libcontainer container a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1. Sep 10 04:49:39.779701 containerd[1545]: time="2025-09-10T04:49:39.779510962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-ddd7v,Uid:bacb79ca-f68c-4697-83fe-0ef469ca760a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028\"" Sep 10 04:49:39.783319 systemd[1]: Started cri-containerd-fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b.scope - libcontainer container fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b. Sep 10 04:49:39.798138 containerd[1545]: time="2025-09-10T04:49:39.798099238Z" level=info msg="StartContainer for \"a49cd0ca5251a3829fdea7a95dcc0ddc61ae3e55833a459aa20a8dbae1a3c1f1\" returns successfully" Sep 10 04:49:39.803471 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:39.842823 containerd[1545]: time="2025-09-10T04:49:39.842782634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2zqbm,Uid:25f6ec00-763a-4872-8e56-095e0486dfa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b\"" Sep 10 04:49:39.847334 containerd[1545]: time="2025-09-10T04:49:39.847228814Z" level=info msg="CreateContainer within sandbox \"fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 04:49:39.860286 containerd[1545]: time="2025-09-10T04:49:39.860246835Z" level=info msg="Container 9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:39.867691 containerd[1545]: time="2025-09-10T04:49:39.867658801Z" level=info msg="CreateContainer within sandbox \"fee5fac5a3d0701f402c47e8fe9aa30f30066875a79f214ca699b435ff5e2d5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49\"" Sep 10 04:49:39.868685 containerd[1545]: time="2025-09-10T04:49:39.868655796Z" level=info msg="StartContainer for \"9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49\"" Sep 10 04:49:39.871020 containerd[1545]: time="2025-09-10T04:49:39.870766587Z" level=info msg="connecting to shim 9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49" address="unix:///run/containerd/s/f3d0d65eae5b4241971ac88e3ded7e9744008daba16245a8202a3dd7f095f075" protocol=ttrpc version=3 Sep 10 04:49:39.897705 systemd[1]: Started cri-containerd-9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49.scope - libcontainer container 9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49. 
Sep 10 04:49:39.947905 containerd[1545]: time="2025-09-10T04:49:39.947849196Z" level=info msg="StartContainer for \"9bc0e1a7f302cee9e2208be2e72f79fdb4bc4e4071bfcf453e59b2067f3f0a49\" returns successfully" Sep 10 04:49:40.266371 containerd[1545]: time="2025-09-10T04:49:40.266314179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t2bk,Uid:b2a6a745-401d-495a-b87c-90f62d69bda1,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:40.277286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348476849.mount: Deactivated successfully. Sep 10 04:49:40.341742 systemd-networkd[1438]: cali89654e86ac1: Gained IPv6LL Sep 10 04:49:40.400426 systemd-networkd[1438]: cali9321a4a6f31: Link UP Sep 10 04:49:40.401049 systemd-networkd[1438]: cali9321a4a6f31: Gained carrier Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.315 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4t2bk-eth0 csi-node-driver- calico-system b2a6a745-401d-495a-b87c-90f62d69bda1 720 0 2025-09-10 04:49:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4t2bk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9321a4a6f31 [] [] }} ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.315 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.344 [INFO][4619] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" HandleID="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Workload="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.344 [INFO][4619] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" HandleID="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Workload="localhost-k8s-csi--node--driver--4t2bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042d380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4t2bk", "timestamp":"2025-09-10 04:49:40.344814631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.344 [INFO][4619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.345 [INFO][4619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.345 [INFO][4619] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.358 [INFO][4619] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.367 [INFO][4619] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.372 [INFO][4619] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.376 [INFO][4619] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.378 [INFO][4619] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.379 [INFO][4619] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.380 [INFO][4619] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.384 [INFO][4619] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.390 [INFO][4619] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.390 [INFO][4619] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" host="localhost" Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.391 [INFO][4619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
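The WorkloadEndpoint dumps above (for example the coredns-7c65d6cfc9-2zqbm one) pack every field onto a single line, which makes them hard to scan. The struct below is a hand-written summary of only the fields this transcript relies on, filled in from that dump; it is not the projectcalico.org/v3 type the plugin actually serializes.

```go
package main

import "fmt"

// endpointSummary mirrors just the fields of the logged WorkloadEndpoint
// dumps that matter for the dataplane; the real v3 type carries far more.
type endpointSummary struct {
	Pod           string
	Namespace     string
	Node          string
	IPNetworks    []string          // per-pod /32, e.g. 192.168.88.133/32
	InterfaceName string            // host-side cali* veth
	MAC           string            // filled in by the "Added Mac ..." step
	Ports         map[string]uint16 // named ports (0x35 = 53, 0x23c1 = 9153)
}

func main() {
	ep := endpointSummary{
		Pod:           "coredns-7c65d6cfc9-2zqbm",
		Namespace:     "kube-system",
		Node:          "localhost",
		IPNetworks:    []string{"192.168.88.133/32"},
		InterfaceName: "calia5b74eca969",
		MAC:           "fa:03:b6:9f:2c:e1",
		Ports:         map[string]uint16{"dns": 53, "dns-tcp": 53, "metrics": 9153},
	}
	fmt.Printf("%+v\n", ep)
}
```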
Sep 10 04:49:40.425870 containerd[1545]: 2025-09-10 04:49:40.391 [INFO][4619] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" HandleID="k8s-pod-network.537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Workload="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.426486 containerd[1545]: 2025-09-10 04:49:40.396 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4t2bk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b2a6a745-401d-495a-b87c-90f62d69bda1", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4t2bk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9321a4a6f31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:40.426486 containerd[1545]: 2025-09-10 04:49:40.396 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.426486 containerd[1545]: 2025-09-10 04:49:40.396 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9321a4a6f31 ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.426486 containerd[1545]: 2025-09-10 04:49:40.401 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.426486 containerd[1545]: 2025-09-10 04:49:40.403 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4t2bk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b2a6a745-401d-495a-b87c-90f62d69bda1", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd", Pod:"csi-node-driver-4t2bk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9321a4a6f31", MAC:"32:03:2a:a7:af:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:40.426486 containerd[1545]: 2025-09-10 04:49:40.413 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" Namespace="calico-system" Pod="csi-node-driver-4t2bk" WorkloadEndpoint="localhost-k8s-csi--node--driver--4t2bk-eth0" Sep 10 04:49:40.437393 kubelet[2659]: I0910 04:49:40.437005 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5ft4d" podStartSLOduration=33.436981423 podStartE2EDuration="33.436981423s" podCreationTimestamp="2025-09-10 04:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:40.435759508 +0000 UTC m=+40.241710709" watchObservedRunningTime="2025-09-10 04:49:40.436981423 +0000 UTC m=+40.242932624" Sep 10 04:49:40.456812 kubelet[2659]: I0910 04:49:40.456520 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2zqbm" podStartSLOduration=33.456499336 podStartE2EDuration="33.456499336s" podCreationTimestamp="2025-09-10 04:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:40.455284502 +0000 UTC m=+40.261235703" watchObservedRunningTime="2025-09-10 04:49:40.456499336 +0000 UTC m=+40.262450537" Sep 10 04:49:40.539095 containerd[1545]: time="2025-09-10T04:49:40.538965971Z" level=info msg="connecting to shim 537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd" address="unix:///run/containerd/s/a30bb24da0f15f560cf3e91aa3c047f2696d65cf0c5ff8e6ace3e258c5ba23ed" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:40.540816 containerd[1545]: time="2025-09-10T04:49:40.540129406Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:40.542653 containerd[1545]: time="2025-09-10T04:49:40.542609875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 10 04:49:40.543668 containerd[1545]: time="2025-09-10T04:49:40.543623911Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:40.546251 containerd[1545]: time="2025-09-10T04:49:40.546020100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:40.547560 containerd[1545]: time="2025-09-10T04:49:40.547104335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.066020153s" Sep 10 04:49:40.551690 containerd[1545]: time="2025-09-10T04:49:40.551585035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 10 04:49:40.558354 containerd[1545]: time="2025-09-10T04:49:40.558327525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 04:49:40.559628 containerd[1545]: time="2025-09-10T04:49:40.559522120Z" level=info msg="CreateContainer within sandbox \"ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 04:49:40.566987 containerd[1545]: time="2025-09-10T04:49:40.566946327Z" level=info msg="Container 0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:40.571709 systemd[1]: Started cri-containerd-537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd.scope - libcontainer container 537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd. 
Sep 10 04:49:40.574162 containerd[1545]: time="2025-09-10T04:49:40.574058776Z" level=info msg="CreateContainer within sandbox \"ebfc6b5b6a383dcc120e67a39d1d6a09fa0ae876f026d3a63dc01cf3138cd577\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\"" Sep 10 04:49:40.575036 containerd[1545]: time="2025-09-10T04:49:40.574949532Z" level=info msg="StartContainer for \"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\"" Sep 10 04:49:40.577872 containerd[1545]: time="2025-09-10T04:49:40.577826039Z" level=info msg="connecting to shim 0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f" address="unix:///run/containerd/s/182e8d1bae510a68951bc17068537a07877621ff64dd516607fb6404fb7ba8b2" protocol=ttrpc version=3 Sep 10 04:49:40.588826 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:40.597769 systemd[1]: Started cri-containerd-0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f.scope - libcontainer container 0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f. Sep 10 04:49:40.606922 containerd[1545]: time="2025-09-10T04:49:40.606886270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4t2bk,Uid:b2a6a745-401d-495a-b87c-90f62d69bda1,Namespace:calico-system,Attempt:0,} returns sandbox id \"537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd\"" Sep 10 04:49:40.633670 containerd[1545]: time="2025-09-10T04:49:40.633634472Z" level=info msg="StartContainer for \"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\" returns successfully" Sep 10 04:49:40.661699 systemd-networkd[1438]: cali35ebaaa4259: Gained IPv6LL Sep 10 04:49:41.173781 systemd-networkd[1438]: calia5b74eca969: Gained IPv6LL Sep 10 04:49:41.266100 containerd[1545]: time="2025-09-10T04:49:41.266043183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675f4f5db4-jgx46,Uid:2ec4b9f5-5e58-4c69-8de9-b5996a60edb4,Namespace:calico-system,Attempt:0,}" Sep 10 04:49:41.404523 systemd-networkd[1438]: calid4a45921eb7: Link UP Sep 10 04:49:41.404986 systemd-networkd[1438]: calid4a45921eb7: Gained carrier Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.309 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0 calico-kube-controllers-675f4f5db4- calico-system 2ec4b9f5-5e58-4c69-8de9-b5996a60edb4 818 0 2025-09-10 04:49:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:675f4f5db4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-675f4f5db4-jgx46 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid4a45921eb7 [] [] }} ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.309 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" 
Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.338 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" HandleID="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Workload="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.338 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" HandleID="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Workload="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050fe40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-675f4f5db4-jgx46", "timestamp":"2025-09-10 04:49:41.338383631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.338 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.338 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.338 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.362 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.371 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.378 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.381 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.386 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.386 [INFO][4743] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.388 [INFO][4743] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168 Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.392 [INFO][4743] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.399 [INFO][4743] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.399 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" host="localhost" Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.399 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:49:41.422350 containerd[1545]: 2025-09-10 04:49:41.399 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" HandleID="k8s-pod-network.2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Workload="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.423364 containerd[1545]: 2025-09-10 04:49:41.401 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0", GenerateName:"calico-kube-controllers-675f4f5db4-", Namespace:"calico-system", SelfLink:"", UID:"2ec4b9f5-5e58-4c69-8de9-b5996a60edb4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675f4f5db4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-675f4f5db4-jgx46", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4a45921eb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:41.423364 containerd[1545]: 2025-09-10 04:49:41.401 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.423364 containerd[1545]: 2025-09-10 04:49:41.401 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4a45921eb7 ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.423364 containerd[1545]: 2025-09-10 04:49:41.405 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.423364 containerd[1545]: 2025-09-10 04:49:41.408 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0", GenerateName:"calico-kube-controllers-675f4f5db4-", Namespace:"calico-system", SelfLink:"", UID:"2ec4b9f5-5e58-4c69-8de9-b5996a60edb4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"675f4f5db4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168", Pod:"calico-kube-controllers-675f4f5db4-jgx46", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4a45921eb7", MAC:"66:32:82:f5:8a:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:41.423364 containerd[1545]: 2025-09-10 04:49:41.418 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" Namespace="calico-system" Pod="calico-kube-controllers-675f4f5db4-jgx46" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--675f4f5db4--jgx46-eth0" Sep 10 04:49:41.429662 systemd-networkd[1438]: cali95ad7710966: Gained IPv6LL Sep 10 04:49:41.457142 kubelet[2659]: I0910 04:49:41.457085 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-xtvmh" podStartSLOduration=19.379504898 podStartE2EDuration="21.45706828s" podCreationTimestamp="2025-09-10 04:49:20 +0000 UTC" firstStartedPulling="2025-09-10 04:49:38.480615584 +0000 UTC m=+38.286566745" lastFinishedPulling="2025-09-10 04:49:40.558178926 +0000 UTC m=+40.364130127" observedRunningTime="2025-09-10 04:49:41.456669482 +0000 UTC m=+41.262620683" watchObservedRunningTime="2025-09-10 04:49:41.45706828 +0000 UTC m=+41.263019441" Sep 10 04:49:41.481758 
containerd[1545]: time="2025-09-10T04:49:41.481713414Z" level=info msg="connecting to shim 2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168" address="unix:///run/containerd/s/dc70f08605e7211b678d80cfacf9f430c059b9120345833423019047a299cb6e" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:41.503737 systemd[1]: Started cri-containerd-2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168.scope - libcontainer container 2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168. Sep 10 04:49:41.519280 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:41.543555 containerd[1545]: time="2025-09-10T04:49:41.543412708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-675f4f5db4-jgx46,Uid:2ec4b9f5-5e58-4c69-8de9-b5996a60edb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168\"" Sep 10 04:49:41.639880 containerd[1545]: time="2025-09-10T04:49:41.639832813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\" id:\"ebb77d45016869653a8c4a55362edfa6f8474c77eb8aba04f138121c2ab52b3f\" pid:4825 exit_status:1 exited_at:{seconds:1757479781 nanos:633234881}" Sep 10 04:49:42.033945 containerd[1545]: time="2025-09-10T04:49:42.033903439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:42.034934 containerd[1545]: time="2025-09-10T04:49:42.034754516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 04:49:42.035758 containerd[1545]: time="2025-09-10T04:49:42.035717512Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:42.037860 containerd[1545]: time="2025-09-10T04:49:42.037830463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:42.038965 containerd[1545]: time="2025-09-10T04:49:42.038935698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.480572933s" Sep 10 04:49:42.039048 containerd[1545]: time="2025-09-10T04:49:42.038968338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 04:49:42.039955 containerd[1545]: time="2025-09-10T04:49:42.039930094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 04:49:42.041275 containerd[1545]: time="2025-09-10T04:49:42.041245689Z" level=info msg="CreateContainer within sandbox \"e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 04:49:42.046928 containerd[1545]: time="2025-09-10T04:49:42.046898945Z" level=info msg="Container 
15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:42.055038 containerd[1545]: time="2025-09-10T04:49:42.054922591Z" level=info msg="CreateContainer within sandbox \"e4ead50238ccf56809b8572441e89e53e275ddd2c01ebbd5e09c413e8f9bb028\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee\"" Sep 10 04:49:42.056725 containerd[1545]: time="2025-09-10T04:49:42.056693264Z" level=info msg="StartContainer for \"15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee\"" Sep 10 04:49:42.058021 containerd[1545]: time="2025-09-10T04:49:42.057954979Z" level=info msg="connecting to shim 15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee" address="unix:///run/containerd/s/fe434224fa10a7a64e58d357bd02da36c8c6aa4f569ec89338b3b8b562b62389" protocol=ttrpc version=3 Sep 10 04:49:42.079689 systemd[1]: Started cri-containerd-15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee.scope - libcontainer container 15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee. Sep 10 04:49:42.112771 containerd[1545]: time="2025-09-10T04:49:42.112733949Z" level=info msg="StartContainer for \"15f5b1452a6cb8dc53c859d56c6e3a18d4c9c210fabf2eb13d7443089b7ca7ee\" returns successfully" Sep 10 04:49:42.197709 systemd-networkd[1438]: cali9321a4a6f31: Gained IPv6LL Sep 10 04:49:42.264764 containerd[1545]: time="2025-09-10T04:49:42.264722832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-gqp8n,Uid:834e4678-ed3e-4f5a-8280-030995d342c6,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:49:42.372661 systemd-networkd[1438]: calie89307a983d: Link UP Sep 10 04:49:42.373182 systemd-networkd[1438]: calie89307a983d: Gained carrier Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.306 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0 calico-apiserver-784d6d7b7f- calico-apiserver 834e4678-ed3e-4f5a-8280-030995d342c6 820 0 2025-09-10 04:49:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:784d6d7b7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-784d6d7b7f-gqp8n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie89307a983d [] [] }} ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.307 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.329 [INFO][4899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" HandleID="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" 
Workload="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.329 [INFO][4899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" HandleID="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Workload="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001af4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-784d6d7b7f-gqp8n", "timestamp":"2025-09-10 04:49:42.32979512 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.330 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.330 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.330 [INFO][4899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.339 [INFO][4899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.346 [INFO][4899] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.351 [INFO][4899] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.353 [INFO][4899] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.355 [INFO][4899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.355 [INFO][4899] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.357 [INFO][4899] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554 Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.361 [INFO][4899] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.368 [INFO][4899] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.368 [INFO][4899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" host="localhost" Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.368 
[INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:49:42.390638 containerd[1545]: 2025-09-10 04:49:42.368 [INFO][4899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" HandleID="k8s-pod-network.0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Workload="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.391121 containerd[1545]: 2025-09-10 04:49:42.370 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0", GenerateName:"calico-apiserver-784d6d7b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"834e4678-ed3e-4f5a-8280-030995d342c6", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784d6d7b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-784d6d7b7f-gqp8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie89307a983d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:42.391121 containerd[1545]: 2025-09-10 04:49:42.370 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.391121 containerd[1545]: 2025-09-10 04:49:42.370 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie89307a983d ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.391121 containerd[1545]: 2025-09-10 04:49:42.373 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.391121 containerd[1545]: 2025-09-10 04:49:42.373 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0", GenerateName:"calico-apiserver-784d6d7b7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"834e4678-ed3e-4f5a-8280-030995d342c6", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"784d6d7b7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554", Pod:"calico-apiserver-784d6d7b7f-gqp8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie89307a983d", MAC:"56:91:aa:84:28:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:49:42.391121 containerd[1545]: 2025-09-10 04:49:42.388 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" Namespace="calico-apiserver" Pod="calico-apiserver-784d6d7b7f-gqp8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--784d6d7b7f--gqp8n-eth0" Sep 10 04:49:42.415178 containerd[1545]: time="2025-09-10T04:49:42.414668644Z" level=info msg="connecting to shim 0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554" address="unix:///run/containerd/s/a585e305d17c69413456a3251a7f2503f04d3272c3249817a10d62703d019d17" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:49:42.433903 systemd[1]: Started cri-containerd-0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554.scope - libcontainer container 0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554. 
Sep 10 04:49:42.451681 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:49:42.457242 kubelet[2659]: I0910 04:49:42.457181 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-784d6d7b7f-ddd7v" podStartSLOduration=25.199152324 podStartE2EDuration="27.457162906s" podCreationTimestamp="2025-09-10 04:49:15 +0000 UTC" firstStartedPulling="2025-09-10 04:49:39.781616033 +0000 UTC m=+39.587567234" lastFinishedPulling="2025-09-10 04:49:42.039626615 +0000 UTC m=+41.845577816" observedRunningTime="2025-09-10 04:49:42.456797947 +0000 UTC m=+42.262749148" watchObservedRunningTime="2025-09-10 04:49:42.457162906 +0000 UTC m=+42.263114067" Sep 10 04:49:42.496270 containerd[1545]: time="2025-09-10T04:49:42.496228062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-784d6d7b7f-gqp8n,Uid:834e4678-ed3e-4f5a-8280-030995d342c6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554\"" Sep 10 04:49:42.498987 containerd[1545]: time="2025-09-10T04:49:42.498957571Z" level=info msg="CreateContainer within sandbox \"0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 04:49:42.505702 containerd[1545]: time="2025-09-10T04:49:42.505666183Z" level=info msg="Container 097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:42.518300 containerd[1545]: time="2025-09-10T04:49:42.518181690Z" level=info msg="CreateContainer within sandbox \"0950c62a21a71b71648f920c9a83106baa94997247487204e49c01fb4671a554\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045\"" Sep 10 04:49:42.519139 containerd[1545]: time="2025-09-10T04:49:42.519114166Z" level=info msg="StartContainer for \"097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045\"" Sep 10 04:49:42.520276 containerd[1545]: time="2025-09-10T04:49:42.520251882Z" level=info msg="connecting to shim 097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045" address="unix:///run/containerd/s/a585e305d17c69413456a3251a7f2503f04d3272c3249817a10d62703d019d17" protocol=ttrpc version=3 Sep 10 04:49:42.536100 containerd[1545]: time="2025-09-10T04:49:42.535999256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\" id:\"ae75b302454e75766bdf53807f9b6894888ec65326acbcdc82d619068bf95dc0\" pid:4966 exit_status:1 exited_at:{seconds:1757479782 nanos:535684697}" Sep 10 04:49:42.542716 systemd[1]: Started cri-containerd-097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045.scope - libcontainer container 097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045. 
Sep 10 04:49:42.584478 containerd[1545]: time="2025-09-10T04:49:42.584433253Z" level=info msg="StartContainer for \"097672df207c558171d19eac37ec74687de310420319dd7a1893a7e714af3045\" returns successfully" Sep 10 04:49:42.924563 containerd[1545]: time="2025-09-10T04:49:42.924500268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:42.925422 containerd[1545]: time="2025-09-10T04:49:42.925155865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 10 04:49:42.927233 containerd[1545]: time="2025-09-10T04:49:42.927199736Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:42.929526 containerd[1545]: time="2025-09-10T04:49:42.929489847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:42.930726 containerd[1545]: time="2025-09-10T04:49:42.930690762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 890.728588ms" Sep 10 04:49:42.930761 containerd[1545]: time="2025-09-10T04:49:42.930726122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 10 04:49:42.932155 containerd[1545]: time="2025-09-10T04:49:42.932124076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 04:49:42.933703 containerd[1545]: time="2025-09-10T04:49:42.933670509Z" level=info msg="CreateContainer within sandbox \"537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 04:49:42.950235 containerd[1545]: time="2025-09-10T04:49:42.950203720Z" level=info msg="Container 0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:42.967352 containerd[1545]: time="2025-09-10T04:49:42.967315008Z" level=info msg="CreateContainer within sandbox \"537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574\"" Sep 10 04:49:42.968092 containerd[1545]: time="2025-09-10T04:49:42.967966846Z" level=info msg="StartContainer for \"0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574\"" Sep 10 04:49:42.969448 containerd[1545]: time="2025-09-10T04:49:42.969414080Z" level=info msg="connecting to shim 0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574" address="unix:///run/containerd/s/a30bb24da0f15f560cf3e91aa3c047f2696d65cf0c5ff8e6ace3e258c5ba23ed" protocol=ttrpc version=3 Sep 10 04:49:42.990698 systemd[1]: Started cri-containerd-0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574.scope - libcontainer container 0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574. 
Sep 10 04:49:43.030721 containerd[1545]: time="2025-09-10T04:49:43.030679666Z" level=info msg="StartContainer for \"0d514814a7ccc9bb6db8577931d6aaef35dff6535d6a032eabe899cb62251574\" returns successfully" Sep 10 04:49:43.285649 systemd-networkd[1438]: calid4a45921eb7: Gained IPv6LL Sep 10 04:49:43.358439 containerd[1545]: time="2025-09-10T04:49:43.358387650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\" id:\"59fa65885fd1c8d09ece84dd98853253ec2dbf8885ee4a2b58490d9fa202c78f\" pid:5068 exited_at:{seconds:1757479783 nanos:357702773}" Sep 10 04:49:43.448052 kubelet[2659]: I0910 04:49:43.448007 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:49:43.456808 kubelet[2659]: I0910 04:49:43.456754 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-784d6d7b7f-gqp8n" podStartSLOduration=28.45673709 podStartE2EDuration="28.45673709s" podCreationTimestamp="2025-09-10 04:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:49:43.456247292 +0000 UTC m=+43.262198493" watchObservedRunningTime="2025-09-10 04:49:43.45673709 +0000 UTC m=+43.262688291" Sep 10 04:49:44.117784 systemd-networkd[1438]: calie89307a983d: Gained IPv6LL Sep 10 04:49:44.452341 kubelet[2659]: I0910 04:49:44.452300 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:49:44.573895 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:34742.service - OpenSSH per-connection server daemon (10.0.0.1:34742). Sep 10 04:49:44.658260 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 34742 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:44.660807 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:44.666950 systemd-logind[1516]: New session 9 of user core. Sep 10 04:49:44.676733 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 04:49:44.737558 kubelet[2659]: I0910 04:49:44.737350 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:49:44.920457 containerd[1545]: time="2025-09-10T04:49:44.920412664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25\" id:\"d59a4d9e5d888282828655274e72489d37b9afdf7b236a8e049122e5cdbace00\" pid:5114 exited_at:{seconds:1757479784 nanos:919844707}" Sep 10 04:49:45.016760 sshd[5097]: Connection closed by 10.0.0.1 port 34742 Sep 10 04:49:45.016459 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:45.021999 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:34742.service: Deactivated successfully. Sep 10 04:49:45.025160 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 04:49:45.026360 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Sep 10 04:49:45.029471 systemd-logind[1516]: Removed session 9. 
Sep 10 04:49:45.069231 containerd[1545]: time="2025-09-10T04:49:45.069188042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25\" id:\"cadb64eab77a8efe19af1811c82aeda6b83e41f124cc2b138b75b85c2a54da54\" pid:5145 exited_at:{seconds:1757479785 nanos:68939923}" Sep 10 04:49:45.359708 containerd[1545]: time="2025-09-10T04:49:45.359599561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:45.360592 containerd[1545]: time="2025-09-10T04:49:45.360407678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 04:49:45.361458 containerd[1545]: time="2025-09-10T04:49:45.361428834Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:45.363290 containerd[1545]: time="2025-09-10T04:49:45.363251907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:45.364048 containerd[1545]: time="2025-09-10T04:49:45.364020304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.431862348s" Sep 10 04:49:45.364088 containerd[1545]: time="2025-09-10T04:49:45.364053744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 04:49:45.365628 containerd[1545]: time="2025-09-10T04:49:45.365525938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 04:49:45.375159 containerd[1545]: time="2025-09-10T04:49:45.375116021Z" level=info msg="CreateContainer within sandbox \"2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 04:49:45.386783 containerd[1545]: time="2025-09-10T04:49:45.385644020Z" level=info msg="Container b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:45.394069 containerd[1545]: time="2025-09-10T04:49:45.394020948Z" level=info msg="CreateContainer within sandbox \"2664997c3e4badefde97feaf43a9658bf21430ae6b23f1bae3d394d94d22e168\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8\"" Sep 10 04:49:45.394529 containerd[1545]: time="2025-09-10T04:49:45.394497586Z" level=info msg="StartContainer for \"b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8\"" Sep 10 04:49:45.395850 containerd[1545]: time="2025-09-10T04:49:45.395818821Z" level=info msg="connecting to shim b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8" address="unix:///run/containerd/s/dc70f08605e7211b678d80cfacf9f430c059b9120345833423019047a299cb6e" protocol=ttrpc version=3 Sep 10 
04:49:45.420725 systemd[1]: Started cri-containerd-b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8.scope - libcontainer container b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8. Sep 10 04:49:45.461689 containerd[1545]: time="2025-09-10T04:49:45.461642807Z" level=info msg="StartContainer for \"b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8\" returns successfully" Sep 10 04:49:46.341103 containerd[1545]: time="2025-09-10T04:49:46.341054368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:46.341751 containerd[1545]: time="2025-09-10T04:49:46.341707646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 10 04:49:46.342357 containerd[1545]: time="2025-09-10T04:49:46.342329083Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:46.345338 containerd[1545]: time="2025-09-10T04:49:46.345302192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:49:46.345762 containerd[1545]: time="2025-09-10T04:49:46.345736471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 980.150293ms" Sep 10 04:49:46.345796 containerd[1545]: time="2025-09-10T04:49:46.345769431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 10 04:49:46.348119 containerd[1545]: time="2025-09-10T04:49:46.348088182Z" level=info msg="CreateContainer within sandbox \"537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 04:49:46.354059 containerd[1545]: time="2025-09-10T04:49:46.354024760Z" level=info msg="Container 58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:49:46.366211 containerd[1545]: time="2025-09-10T04:49:46.366161914Z" level=info msg="CreateContainer within sandbox \"537cc7343b1816db511319ba6bf309d3e0bf40def3468ce8d024356d4abd7bdd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639\"" Sep 10 04:49:46.366712 containerd[1545]: time="2025-09-10T04:49:46.366687152Z" level=info msg="StartContainer for \"58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639\"" Sep 10 04:49:46.368494 containerd[1545]: time="2025-09-10T04:49:46.368463945Z" level=info msg="connecting to shim 58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639" address="unix:///run/containerd/s/a30bb24da0f15f560cf3e91aa3c047f2696d65cf0c5ff8e6ace3e258c5ba23ed" protocol=ttrpc version=3 Sep 10 04:49:46.389681 systemd[1]: Started 
cri-containerd-58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639.scope - libcontainer container 58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639. Sep 10 04:49:46.460715 containerd[1545]: time="2025-09-10T04:49:46.460668119Z" level=info msg="StartContainer for \"58c66c86e89029a3f616a292e66087c474177f956251aad44f5a3f607ac88639\" returns successfully" Sep 10 04:49:46.500419 kubelet[2659]: I0910 04:49:46.500353 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4t2bk" podStartSLOduration=20.761989767 podStartE2EDuration="26.50032401s" podCreationTimestamp="2025-09-10 04:49:20 +0000 UTC" firstStartedPulling="2025-09-10 04:49:40.608100745 +0000 UTC m=+40.414051946" lastFinishedPulling="2025-09-10 04:49:46.346434988 +0000 UTC m=+46.152386189" observedRunningTime="2025-09-10 04:49:46.487801857 +0000 UTC m=+46.293753058" watchObservedRunningTime="2025-09-10 04:49:46.50032401 +0000 UTC m=+46.306275211" Sep 10 04:49:46.501888 kubelet[2659]: I0910 04:49:46.501848 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-675f4f5db4-jgx46" podStartSLOduration=22.681762446 podStartE2EDuration="26.501837084s" podCreationTimestamp="2025-09-10 04:49:20 +0000 UTC" firstStartedPulling="2025-09-10 04:49:41.544817022 +0000 UTC m=+41.350768223" lastFinishedPulling="2025-09-10 04:49:45.36489166 +0000 UTC m=+45.170842861" observedRunningTime="2025-09-10 04:49:46.500784808 +0000 UTC m=+46.306735969" watchObservedRunningTime="2025-09-10 04:49:46.501837084 +0000 UTC m=+46.307788285" Sep 10 04:49:46.506570 containerd[1545]: time="2025-09-10T04:49:46.506512547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8\" id:\"7076aab0d49608eb9f02083d1c202dc40ac889688b073b823c2d0a9a714e2c07\" pid:5257 exited_at:{seconds:1757479786 nanos:506223348}" Sep 10 04:49:47.344269 kubelet[2659]: I0910 04:49:47.344093 2659 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 04:49:47.344269 kubelet[2659]: I0910 04:49:47.344138 2659 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 04:49:50.032123 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:46626.service - OpenSSH per-connection server daemon (10.0.0.1:46626). Sep 10 04:49:50.084957 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 46626 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:50.086388 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:50.090604 systemd-logind[1516]: New session 10 of user core. Sep 10 04:49:50.096701 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 04:49:50.292147 sshd[5275]: Connection closed by 10.0.0.1 port 46626 Sep 10 04:49:50.292843 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:50.300995 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:46626.service: Deactivated successfully. Sep 10 04:49:50.303422 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 04:49:50.304677 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. 
Sep 10 04:49:50.311061 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:46636.service - OpenSSH per-connection server daemon (10.0.0.1:46636). Sep 10 04:49:50.314079 systemd-logind[1516]: Removed session 10. Sep 10 04:49:50.375667 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 46636 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:50.376840 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:50.382580 systemd-logind[1516]: New session 11 of user core. Sep 10 04:49:50.393671 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 04:49:50.632461 sshd[5298]: Connection closed by 10.0.0.1 port 46636 Sep 10 04:49:50.633740 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:50.644412 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:46636.service: Deactivated successfully. Sep 10 04:49:50.649006 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 04:49:50.649680 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Sep 10 04:49:50.653310 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:46646.service - OpenSSH per-connection server daemon (10.0.0.1:46646). Sep 10 04:49:50.655129 systemd-logind[1516]: Removed session 11. Sep 10 04:49:50.703773 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 46646 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:50.704852 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:50.708586 systemd-logind[1516]: New session 12 of user core. Sep 10 04:49:50.721684 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 04:49:50.856554 sshd[5316]: Connection closed by 10.0.0.1 port 46646 Sep 10 04:49:50.856874 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:50.859626 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:46646.service: Deactivated successfully. Sep 10 04:49:50.861382 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 04:49:50.863129 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Sep 10 04:49:50.864070 systemd-logind[1516]: Removed session 12. Sep 10 04:49:55.875932 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:46662.service - OpenSSH per-connection server daemon (10.0.0.1:46662). Sep 10 04:49:55.939753 sshd[5337]: Accepted publickey for core from 10.0.0.1 port 46662 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:55.940933 sshd-session[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:55.944903 systemd-logind[1516]: New session 13 of user core. Sep 10 04:49:55.952666 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 04:49:56.106211 sshd[5340]: Connection closed by 10.0.0.1 port 46662 Sep 10 04:49:56.106525 sshd-session[5337]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:56.117667 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:46662.service: Deactivated successfully. Sep 10 04:49:56.119435 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 04:49:56.120229 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Sep 10 04:49:56.122798 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670). Sep 10 04:49:56.123397 systemd-logind[1516]: Removed session 13. 
Sep 10 04:49:56.181156 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:56.182234 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:56.186369 systemd-logind[1516]: New session 14 of user core. Sep 10 04:49:56.191704 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 04:49:56.411304 sshd[5356]: Connection closed by 10.0.0.1 port 46670 Sep 10 04:49:56.411821 sshd-session[5353]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:56.424461 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:46670.service: Deactivated successfully. Sep 10 04:49:56.425942 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 04:49:56.426672 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Sep 10 04:49:56.428774 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:46682.service - OpenSSH per-connection server daemon (10.0.0.1:46682). Sep 10 04:49:56.429483 systemd-logind[1516]: Removed session 14. Sep 10 04:49:56.479740 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 46682 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:56.480787 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:56.484604 systemd-logind[1516]: New session 15 of user core. Sep 10 04:49:56.490662 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 04:49:57.958111 sshd[5371]: Connection closed by 10.0.0.1 port 46682 Sep 10 04:49:57.958894 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:57.970952 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:46682.service: Deactivated successfully. Sep 10 04:49:57.974064 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 04:49:57.974244 systemd[1]: session-15.scope: Consumed 532ms CPU time, 70.9M memory peak. Sep 10 04:49:57.975703 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Sep 10 04:49:57.979133 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:46692.service - OpenSSH per-connection server daemon (10.0.0.1:46692). Sep 10 04:49:57.981922 systemd-logind[1516]: Removed session 15. Sep 10 04:49:58.045453 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 46692 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:58.046712 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:58.050794 systemd-logind[1516]: New session 16 of user core. Sep 10 04:49:58.070694 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 04:49:58.380976 sshd[5395]: Connection closed by 10.0.0.1 port 46692 Sep 10 04:49:58.381724 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:58.389264 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:46692.service: Deactivated successfully. Sep 10 04:49:58.392110 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 04:49:58.394146 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Sep 10 04:49:58.396230 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:46698.service - OpenSSH per-connection server daemon (10.0.0.1:46698). Sep 10 04:49:58.398747 systemd-logind[1516]: Removed session 16. 
Sep 10 04:49:58.452567 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 46698 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:49:58.454001 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:49:58.458552 systemd-logind[1516]: New session 17 of user core. Sep 10 04:49:58.464840 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 04:49:58.595156 sshd[5410]: Connection closed by 10.0.0.1 port 46698 Sep 10 04:49:58.595687 sshd-session[5407]: pam_unix(sshd:session): session closed for user core Sep 10 04:49:58.599019 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:46698.service: Deactivated successfully. Sep 10 04:49:58.601080 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 04:49:58.601955 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. Sep 10 04:49:58.603161 systemd-logind[1516]: Removed session 17. Sep 10 04:49:59.005411 containerd[1545]: time="2025-09-10T04:49:59.005285469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b81b0a3cb9595413eb42eea29f4b6e7ced46b66f5113d90bc34bc4ad7d805ce8\" id:\"e84400b6db04f6c4e71b2672b741fea34cdaefbc4394e742bcd1f12e0316078e\" pid:5436 exited_at:{seconds:1757479799 nanos:5055550}" Sep 10 04:50:03.612961 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:33198.service - OpenSSH per-connection server daemon (10.0.0.1:33198). Sep 10 04:50:03.672905 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 33198 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:50:03.674018 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:50:03.677708 systemd-logind[1516]: New session 18 of user core. Sep 10 04:50:03.685890 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 04:50:03.810824 sshd[5456]: Connection closed by 10.0.0.1 port 33198 Sep 10 04:50:03.811175 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Sep 10 04:50:03.816031 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:33198.service: Deactivated successfully. Sep 10 04:50:03.818033 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 04:50:03.819568 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. Sep 10 04:50:03.820578 systemd-logind[1516]: Removed session 18. Sep 10 04:50:08.835714 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:33202.service - OpenSSH per-connection server daemon (10.0.0.1:33202). Sep 10 04:50:08.892602 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 33202 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:50:08.893996 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:50:08.897727 systemd-logind[1516]: New session 19 of user core. Sep 10 04:50:08.905861 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 04:50:09.069863 sshd[5478]: Connection closed by 10.0.0.1 port 33202 Sep 10 04:50:09.070679 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Sep 10 04:50:09.078685 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:33202.service: Deactivated successfully. Sep 10 04:50:09.082053 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 04:50:09.082904 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. Sep 10 04:50:09.084573 systemd-logind[1516]: Removed session 19. 
Sep 10 04:50:10.303470 containerd[1545]: time="2025-09-10T04:50:10.303418035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d4fa39b06d2662b98ebc3cac39f9951b2032d6e53c1460242c4a58f883d8c5f\" id:\"ef9ce54ee303e180d4e799855cf2c22a58153e393cff49371d0456f3b1dafd66\" pid:5505 exited_at:{seconds:1757479810 nanos:302940116}" Sep 10 04:50:14.082891 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:45840.service - OpenSSH per-connection server daemon (10.0.0.1:45840). Sep 10 04:50:14.144336 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 45840 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:50:14.145529 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:50:14.149601 systemd-logind[1516]: New session 20 of user core. Sep 10 04:50:14.155674 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 04:50:14.302659 sshd[5530]: Connection closed by 10.0.0.1 port 45840 Sep 10 04:50:14.302368 sshd-session[5527]: pam_unix(sshd:session): session closed for user core Sep 10 04:50:14.307790 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:45840.service: Deactivated successfully. Sep 10 04:50:14.309687 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 04:50:14.310472 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. Sep 10 04:50:14.311484 systemd-logind[1516]: Removed session 20. Sep 10 04:50:14.818441 containerd[1545]: time="2025-09-10T04:50:14.818396455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1ede1ccee3477941c5ac1d5e874befa135a7e43c95f9da21a8045fbc6276c25\" id:\"7dd1dac46d5c9e3fa8d49a5292cbff6f503ba426e554ca8e7cd8f9d4d89cb9b0\" pid:5554 exited_at:{seconds:1757479814 nanos:817468736}" Sep 10 04:50:15.385687 kubelet[2659]: I0910 04:50:15.385641 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"