Sep 10 23:39:05.777718 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:39:05.777741 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:39:05.777751 kernel: KASLR enabled
Sep 10 23:39:05.777756 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:39:05.777762 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Sep 10 23:39:05.777767 kernel: random: crng init done
Sep 10 23:39:05.777774 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 10 23:39:05.777780 kernel: secureboot: Secure boot enabled
Sep 10 23:39:05.777785 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:39:05.777793 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 10 23:39:05.777799 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:39:05.777805 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777812 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777818 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777825 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777833 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777840 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777846 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777853 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777860 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:39:05.777866 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:39:05.777872 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:39:05.777878 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:39:05.777884 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 10 23:39:05.777890 kernel: Zone ranges:
Sep 10 23:39:05.777898 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:39:05.777904 kernel: DMA32 empty
Sep 10 23:39:05.777910 kernel: Normal empty
Sep 10 23:39:05.777917 kernel: Device empty
Sep 10 23:39:05.777922 kernel: Movable zone start for each node
Sep 10 23:39:05.777928 kernel: Early memory node ranges
Sep 10 23:39:05.777940 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 10 23:39:05.777947 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 10 23:39:05.777953 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 10 23:39:05.777959 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 10 23:39:05.777965 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 10 23:39:05.777971 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 10 23:39:05.777978 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 10 23:39:05.777984 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 10 23:39:05.777990 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:39:05.777999 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:39:05.778006 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:39:05.778012 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 10 23:39:05.778019 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:39:05.778026 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:39:05.778033 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:39:05.778039 kernel: psci: Trusted OS migration not required
Sep 10 23:39:05.778046 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:39:05.778053 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:39:05.778060 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:39:05.778067 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:39:05.778074 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:39:05.778080 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:39:05.778088 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:39:05.778094 kernel: CPU features: detected: Spectre-v4
Sep 10 23:39:05.778101 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:39:05.778107 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:39:05.778114 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:39:05.778120 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:39:05.778127 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:39:05.778133 kernel: alternatives: applying boot alternatives
Sep 10 23:39:05.778140 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:39:05.778147 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:39:05.778154 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:39:05.778161 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:39:05.778168 kernel: Fallback order for Node 0: 0
Sep 10 23:39:05.778174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:39:05.778181 kernel: Policy zone: DMA
Sep 10 23:39:05.778187 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:39:05.778194 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:39:05.778200 kernel: software IO TLB: area num 4.
Sep 10 23:39:05.778206 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:39:05.778213 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 10 23:39:05.778219 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:39:05.778226 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:39:05.778233 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:39:05.778241 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:39:05.778248 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:39:05.778254 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:39:05.778261 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:39:05.778267 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:39:05.778274 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:39:05.778281 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:39:05.778287 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:39:05.778294 kernel: GICv3: 256 SPIs implemented
Sep 10 23:39:05.778300 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:39:05.778307 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:39:05.778314 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:39:05.778321 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:39:05.778327 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:39:05.778333 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:39:05.778340 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:39:05.778347 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:39:05.778353 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:39:05.778360 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:39:05.778367 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:39:05.778373 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:39:05.778379 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:39:05.778394 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:39:05.778402 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:39:05.778409 kernel: arm-pv: using stolen time PV
Sep 10 23:39:05.778416 kernel: Console: colour dummy device 80x25
Sep 10 23:39:05.778423 kernel: ACPI: Core revision 20240827
Sep 10 23:39:05.778429 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:39:05.778436 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:39:05.778443 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:39:05.778450 kernel: landlock: Up and running.
Sep 10 23:39:05.778456 kernel: SELinux: Initializing.
Sep 10 23:39:05.778473 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:39:05.778481 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:39:05.778487 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:39:05.778494 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:39:05.778501 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:39:05.778508 kernel: Remapping and enabling EFI services.
Sep 10 23:39:05.778515 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:39:05.778521 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:39:05.778528 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:39:05.778536 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:39:05.778548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:39:05.778555 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:39:05.778563 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:39:05.778571 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:39:05.778578 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:39:05.778586 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:39:05.778593 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:39:05.778600 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:39:05.778608 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:39:05.778615 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:39:05.778622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:39:05.778629 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:39:05.778636 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:39:05.778650 kernel: SMP: Total of 4 processors activated.
Sep 10 23:39:05.778657 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:39:05.778671 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:39:05.778678 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:39:05.778690 kernel: CPU features: detected: Common not Private translations
Sep 10 23:39:05.778697 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:39:05.778704 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:39:05.778711 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:39:05.778718 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:39:05.778739 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:39:05.778746 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:39:05.778753 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:39:05.778761 kernel: alternatives: applying system-wide alternatives
Sep 10 23:39:05.778770 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:39:05.778777 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 10 23:39:05.778785 kernel: devtmpfs: initialized
Sep 10 23:39:05.778792 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:39:05.778799 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:39:05.778806 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:39:05.778813 kernel: 0 pages in range for non-PLT usage
Sep 10 23:39:05.778820 kernel: 508560 pages in range for PLT usage
Sep 10 23:39:05.778827 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:39:05.778836 kernel: SMBIOS 3.0.0 present.
Sep 10 23:39:05.778843 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:39:05.778850 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:39:05.778857 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:39:05.778864 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:39:05.778871 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:39:05.778878 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:39:05.778885 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:39:05.778893 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Sep 10 23:39:05.778901 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:39:05.778908 kernel: cpuidle: using governor menu
Sep 10 23:39:05.778915 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:39:05.778922 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:39:05.778929 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:39:05.778936 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:39:05.778943 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:39:05.778950 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:39:05.778957 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:39:05.778965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:39:05.778972 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:39:05.778979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:39:05.778986 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:39:05.778993 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:39:05.779000 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:39:05.779008 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:39:05.779016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:39:05.779023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:39:05.779031 kernel: ACPI: Interpreter enabled
Sep 10 23:39:05.779038 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:39:05.779046 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:39:05.779052 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:39:05.779059 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:39:05.779066 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:39:05.779073 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:39:05.779080 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:39:05.779087 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:39:05.779095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:39:05.779230 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:39:05.779298 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:39:05.779360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:39:05.779424 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:39:05.779555 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:39:05.779566 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:39:05.779577 kernel: PCI host bridge to bus 0000:00
Sep 10 23:39:05.779653 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:39:05.779727 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:39:05.779784 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:39:05.779838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:39:05.779924 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:39:05.779998 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:39:05.780081 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:39:05.780147 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:39:05.780212 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:39:05.780274 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:39:05.780333 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:39:05.780394 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:39:05.780452 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:39:05.780523 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:39:05.780579 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:39:05.780588 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:39:05.780595 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:39:05.780603 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:39:05.780610 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:39:05.780617 kernel: iommu: Default domain type: Translated
Sep 10 23:39:05.780626 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:39:05.780633 kernel: efivars: Registered efivars operations
Sep 10 23:39:05.780640 kernel: vgaarb: loaded
Sep 10 23:39:05.780647 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:39:05.780654 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:39:05.780666 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:39:05.780674 kernel: pnp: PnP ACPI init
Sep 10 23:39:05.780758 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:39:05.780769 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:39:05.780780 kernel: NET: Registered PF_INET protocol family
Sep 10 23:39:05.780787 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:39:05.780794 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:39:05.780802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:39:05.780809 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:39:05.780816 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:39:05.780824 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:39:05.780831 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:39:05.780838 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:39:05.780847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:39:05.780854 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:39:05.780861 kernel: kvm [1]: HYP mode not available
Sep 10 23:39:05.780868 kernel: Initialise system trusted keyrings
Sep 10 23:39:05.780875 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:39:05.780882 kernel: Key type asymmetric registered
Sep 10 23:39:05.780889 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:39:05.780896 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:39:05.780903 kernel: io scheduler mq-deadline registered
Sep 10 23:39:05.780912 kernel: io scheduler kyber registered
Sep 10 23:39:05.780919 kernel: io scheduler bfq registered
Sep 10 23:39:05.780926 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:39:05.780933 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:39:05.780940 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:39:05.781003 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:39:05.781013 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:39:05.781020 kernel: thunder_xcv, ver 1.0
Sep 10 23:39:05.781027 kernel: thunder_bgx, ver 1.0
Sep 10 23:39:05.781037 kernel: nicpf, ver 1.0
Sep 10 23:39:05.781044 kernel: nicvf, ver 1.0
Sep 10 23:39:05.781121 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:39:05.781178 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:39:05 UTC (1757547545)
Sep 10 23:39:05.781187 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:39:05.781194 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:39:05.781201 kernel: watchdog: NMI not fully supported
Sep 10 23:39:05.781208 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:39:05.781217 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:39:05.781224 kernel: Segment Routing with IPv6
Sep 10 23:39:05.781231 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:39:05.781238 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:39:05.781245 kernel: Key type dns_resolver registered
Sep 10 23:39:05.781252 kernel: registered taskstats version 1
Sep 10 23:39:05.781259 kernel: Loading compiled-in X.509 certificates
Sep 10 23:39:05.781266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 10 23:39:05.781274 kernel: Demotion targets for Node 0: null
Sep 10 23:39:05.781282 kernel: Key type .fscrypt registered
Sep 10 23:39:05.781289 kernel: Key type fscrypt-provisioning registered
Sep 10 23:39:05.781296 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:39:05.781303 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:39:05.781310 kernel: ima: No architecture policies found
Sep 10 23:39:05.781318 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:39:05.781325 kernel: clk: Disabling unused clocks
Sep 10 23:39:05.781332 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:39:05.781339 kernel: Warning: unable to open an initial console.
Sep 10 23:39:05.781347 kernel: Freeing unused kernel memory: 38976K
Sep 10 23:39:05.781354 kernel: Run /init as init process
Sep 10 23:39:05.781361 kernel: with arguments:
Sep 10 23:39:05.781369 kernel: /init
Sep 10 23:39:05.781376 kernel: with environment:
Sep 10 23:39:05.781383 kernel: HOME=/
Sep 10 23:39:05.781390 kernel: TERM=linux
Sep 10 23:39:05.781397 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:39:05.781405 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:39:05.781417 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:39:05.781425 systemd[1]: Detected virtualization kvm.
Sep 10 23:39:05.781433 systemd[1]: Detected architecture arm64.
Sep 10 23:39:05.781440 systemd[1]: Running in initrd.
Sep 10 23:39:05.781447 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:39:05.781455 systemd[1]: Hostname set to .
Sep 10 23:39:05.781463 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:39:05.781498 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:39:05.781506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:39:05.781513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:39:05.781522 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:39:05.781537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:39:05.781544 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:39:05.781553 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:39:05.781563 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:39:05.781571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:39:05.781579 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:39:05.781586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:39:05.781594 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:39:05.781605 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:39:05.781613 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:39:05.781621 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:39:05.781630 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:39:05.781637 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:39:05.781645 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:39:05.781653 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:39:05.781667 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:39:05.781675 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:39:05.781683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:39:05.781691 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:39:05.781699 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:39:05.781708 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:39:05.781716 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:39:05.781724 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:39:05.781731 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:39:05.781739 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:39:05.781746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:39:05.781754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:39:05.781762 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:39:05.781771 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:39:05.781779 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:39:05.781787 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:39:05.781813 systemd-journald[245]: Collecting audit messages is disabled.
Sep 10 23:39:05.781835 systemd-journald[245]: Journal started
Sep 10 23:39:05.781853 systemd-journald[245]: Runtime Journal (/run/log/journal/a3919978b11c47f1bd21551b344691fc) is 6M, max 48.5M, 42.4M free.
Sep 10 23:39:05.786611 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:39:05.773111 systemd-modules-load[246]: Inserted module 'overlay'
Sep 10 23:39:05.789216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:39:05.789831 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 10 23:39:05.790822 kernel: Bridge firewalling registered
Sep 10 23:39:05.792708 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:39:05.793156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:39:05.795182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:39:05.799131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:39:05.800771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:39:05.802377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:39:05.812125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:39:05.819243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:39:05.821078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:39:05.821804 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:39:05.824949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:39:05.827979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:39:05.831719 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:39:05.834597 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:39:05.856954 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:39:05.871364 systemd-resolved[288]: Positive Trust Anchors:
Sep 10 23:39:05.871384 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:39:05.871416 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:39:05.876386 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 10 23:39:05.878878 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:39:05.881784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:39:05.931502 kernel: SCSI subsystem initialized
Sep 10 23:39:05.936481 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:39:05.943496 kernel: iscsi: registered transport (tcp)
Sep 10 23:39:05.956493 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:39:05.956524 kernel: QLogic iSCSI HBA Driver
Sep 10 23:39:05.973310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:39:05.996523 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:39:05.998488 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:39:06.044224 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:39:06.046384 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:39:06.102505 kernel: raid6: neonx8 gen() 15769 MB/s
Sep 10 23:39:06.119482 kernel: raid6: neonx4 gen() 15791 MB/s
Sep 10 23:39:06.136482 kernel: raid6: neonx2 gen() 13207 MB/s
Sep 10 23:39:06.153483 kernel: raid6: neonx1 gen() 10412 MB/s
Sep 10 23:39:06.170484 kernel: raid6: int64x8 gen() 6897 MB/s
Sep 10 23:39:06.187482 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 10 23:39:06.204483 kernel: raid6: int64x2 gen() 6102 MB/s
Sep 10 23:39:06.221496 kernel: raid6: int64x1 gen() 5044 MB/s
Sep 10 23:39:06.221516 kernel: raid6: using algorithm neonx4 gen() 15791 MB/s
Sep 10 23:39:06.238493 kernel: raid6: .... xor() 12333 MB/s, rmw enabled
Sep 10 23:39:06.238514 kernel: raid6: using neon recovery algorithm
Sep 10 23:39:06.243641 kernel: xor: measuring software checksum speed
Sep 10 23:39:06.243670 kernel: 8regs : 21658 MB/sec
Sep 10 23:39:06.244824 kernel: 32regs : 21681 MB/sec
Sep 10 23:39:06.244841 kernel: arm64_neon : 28080 MB/sec
Sep 10 23:39:06.244850 kernel: xor: using function: arm64_neon (28080 MB/sec)
Sep 10 23:39:06.297497 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:39:06.303919 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:39:06.306263 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:39:06.339690 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 10 23:39:06.344185 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:39:06.346115 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:39:06.373356 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Sep 10 23:39:06.397403 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:39:06.399628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:39:06.459002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:39:06.461269 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:39:06.518375 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:39:06.519193 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:39:06.522573 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:39:06.522589 kernel: GPT:9289727 != 19775487
Sep 10 23:39:06.522598 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:39:06.523696 kernel: GPT:9289727 != 19775487
Sep 10 23:39:06.525582 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:39:06.525620 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:39:06.544609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:39:06.545728 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:39:06.550015 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:39:06.554992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:39:06.573423 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:39:06.582409 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:39:06.584520 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:39:06.585635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:39:06.604123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:39:06.610244 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:39:06.611357 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:39:06.613368 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:39:06.615901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:39:06.617672 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:39:06.620117 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:39:06.621838 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:39:06.641640 disk-uuid[592]: Primary Header is updated.
Sep 10 23:39:06.641640 disk-uuid[592]: Secondary Entries is updated.
Sep 10 23:39:06.641640 disk-uuid[592]: Secondary Header is updated.
Sep 10 23:39:06.645456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:39:06.648032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:39:07.656497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:39:07.656604 disk-uuid[598]: The operation has completed successfully.
Sep 10 23:39:07.684756 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:39:07.684877 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:39:07.709313 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:39:07.738554 sh[613]: Success
Sep 10 23:39:07.752861 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:39:07.752919 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:39:07.753918 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:39:07.761500 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:39:07.790149 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:39:07.793885 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:39:07.815635 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:39:07.821156 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 10 23:39:07.821198 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade
Sep 10 23:39:07.821210 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:39:07.825709 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:39:07.825734 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:39:07.826781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:39:07.827957 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:39:07.829104 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:39:07.829953 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:39:07.832801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:39:07.858095 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Sep 10 23:39:07.858149 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:39:07.859017 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:39:07.861751 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:39:07.861796 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:39:07.866485 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:39:07.867507 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:39:07.869831 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:39:07.938182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:39:07.942660 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:39:07.978417 ignition[699]: Ignition 2.21.0
Sep 10 23:39:07.978434 ignition[699]: Stage: fetch-offline
Sep 10 23:39:07.978480 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:39:07.978489 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:39:07.978684 ignition[699]: parsed url from cmdline: ""
Sep 10 23:39:07.978688 ignition[699]: no config URL provided
Sep 10 23:39:07.978692 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:39:07.978698 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:39:07.978718 ignition[699]: op(1): [started] loading QEMU firmware config module
Sep 10 23:39:07.986221 systemd-networkd[805]: lo: Link UP
Sep 10 23:39:07.978724 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:39:07.986224 systemd-networkd[805]: lo: Gained carrier
Sep 10 23:39:07.987211 ignition[699]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:39:07.987047 systemd-networkd[805]: Enumeration completed
Sep 10 23:39:07.987594 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:39:07.988834 systemd[1]: Reached target network.target - Network.
Sep 10 23:39:07.988993 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:39:07.988997 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:39:07.989477 systemd-networkd[805]: eth0: Link UP
Sep 10 23:39:07.989856 systemd-networkd[805]: eth0: Gained carrier
Sep 10 23:39:07.989866 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:39:08.000553 systemd-networkd[805]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:39:08.042723 ignition[699]: parsing config with SHA512: a69ad50fc692f4188256769eb061b7947df1fed32849fee64f6e0f4ff5a43bb79acb876a6cc4aff1ea1034bb46ebf278a417e72582684f25d15dbeac14345142
Sep 10 23:39:08.047142 unknown[699]: fetched base config from "system"
Sep 10 23:39:08.047153 unknown[699]: fetched user config from "qemu"
Sep 10 23:39:08.047621 ignition[699]: fetch-offline: fetch-offline passed
Sep 10 23:39:08.049157 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:39:08.047704 ignition[699]: Ignition finished successfully
Sep 10 23:39:08.050633 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:39:08.051569 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:39:08.086579 ignition[814]: Ignition 2.21.0
Sep 10 23:39:08.086594 ignition[814]: Stage: kargs
Sep 10 23:39:08.086751 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:39:08.086761 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:39:08.088944 ignition[814]: kargs: kargs passed
Sep 10 23:39:08.089009 ignition[814]: Ignition finished successfully
Sep 10 23:39:08.093530 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:39:08.096328 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:39:08.127594 ignition[822]: Ignition 2.21.0
Sep 10 23:39:08.127612 ignition[822]: Stage: disks
Sep 10 23:39:08.127774 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:39:08.127783 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:39:08.129140 ignition[822]: disks: disks passed
Sep 10 23:39:08.131025 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:39:08.129201 ignition[822]: Ignition finished successfully
Sep 10 23:39:08.134219 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:39:08.135229 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:39:08.137582 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:39:08.138787 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:39:08.140540 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:39:08.142997 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:39:08.177698 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 23:39:08.183274 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:39:08.188436 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:39:08.253511 kernel: EXT4-fs (vda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none.
Sep 10 23:39:08.254223 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:39:08.256074 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:39:08.259121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:39:08.260753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:39:08.261595 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:39:08.261635 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:39:08.261667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:39:08.276245 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:39:08.278853 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:39:08.281118 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Sep 10 23:39:08.283492 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:39:08.283526 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:39:08.285478 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:39:08.285504 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:39:08.287297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:39:08.319981 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:39:08.323357 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:39:08.327752 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:39:08.330516 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:39:08.408516 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:39:08.410308 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:39:08.411759 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:39:08.431495 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:39:08.440754 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:39:08.463130 ignition[954]: INFO : Ignition 2.21.0
Sep 10 23:39:08.463130 ignition[954]: INFO : Stage: mount
Sep 10 23:39:08.464489 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:39:08.464489 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:39:08.466137 ignition[954]: INFO : mount: mount passed
Sep 10 23:39:08.466137 ignition[954]: INFO : Ignition finished successfully
Sep 10 23:39:08.466970 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:39:08.469008 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:39:08.526010 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.10
Sep 10 23:39:08.526025 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Sep 10 23:39:08.820124 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:39:08.822930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:39:08.845486 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 10 23:39:08.847485 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:39:08.847570 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:39:08.849950 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:39:08.849980 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:39:08.851414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:39:08.878781 ignition[983]: INFO : Ignition 2.21.0
Sep 10 23:39:08.878781 ignition[983]: INFO : Stage: files
Sep 10 23:39:08.881354 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:39:08.881354 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:39:08.884727 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:39:08.884727 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:39:08.884727 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:39:08.888221 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:39:08.888221 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:39:08.888221 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:39:08.888221 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 23:39:08.888221 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 10 23:39:08.886581 unknown[983]: wrote ssh authorized keys file for user: core
Sep 10 23:39:08.956764 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:39:09.325085 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:39:09.326789 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:39:09.338708 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:39:09.338708 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:39:09.338708 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:39:09.343397 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:39:09.343397 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:39:09.343397 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 10 23:39:09.655880 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 23:39:10.000990 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:39:10.000990 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 23:39:10.004398 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:39:10.001571 systemd-networkd[805]: eth0: Gained IPv6LL
Sep 10 23:39:10.007882 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:39:10.007882 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 23:39:10.007882 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 23:39:10.011991 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:39:10.011991 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:39:10.011991 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 23:39:10.011991 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:39:10.022667 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:39:10.026309 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:39:10.028896 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:39:10.028896 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:39:10.028896 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:39:10.028896 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:39:10.028896 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:39:10.028896 ignition[983]: INFO : files: files passed
Sep 10 23:39:10.028896 ignition[983]: INFO : Ignition finished successfully
Sep 10 23:39:10.029549 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:39:10.032549 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:39:10.034109 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:39:10.052298 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:39:10.052410 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:39:10.055577 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:39:10.057098 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:39:10.057098 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:39:10.059869 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:39:10.059120 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:39:10.061090 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:39:10.065509 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:39:10.104641 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:39:10.105564 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:39:10.108035 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 23:39:10.109017 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 23:39:10.110668 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 23:39:10.111550 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 23:39:10.143744 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:39:10.146397 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 23:39:10.166936 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:39:10.168239 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:39:10.170179 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 23:39:10.171697 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 23:39:10.171827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:39:10.174166 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 23:39:10.176001 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 23:39:10.177420 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 23:39:10.179036 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:39:10.180709 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 23:39:10.182451 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:39:10.184251 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 23:39:10.186034 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:39:10.187744 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 23:39:10.189350 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 23:39:10.191086 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 23:39:10.192395 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 23:39:10.192545 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:39:10.194647 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:39:10.196307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:39:10.197952 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 23:39:10.201546 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:39:10.202693 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 23:39:10.202826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:39:10.205322 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:39:10.205442 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:39:10.207290 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:39:10.208533 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:39:10.208661 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:39:10.210525 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:39:10.211921 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:39:10.213338 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:39:10.213430 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:39:10.215445 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:39:10.215534 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:39:10.217087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 23:39:10.217204 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:39:10.218843 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 23:39:10.218942 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 23:39:10.221159 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 23:39:10.223117 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 23:39:10.224818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 23:39:10.224939 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:39:10.226782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 23:39:10.226878 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:39:10.231918 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 23:39:10.234630 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 23:39:10.244907 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:39:10.248994 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:39:10.249099 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:39:10.253276 ignition[1039]: INFO : Ignition 2.21.0
Sep 10 23:39:10.253276 ignition[1039]: INFO : Stage: umount
Sep 10 23:39:10.254712 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:39:10.254712 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:39:10.254712 ignition[1039]: INFO : umount: umount passed
Sep 10 23:39:10.254712 ignition[1039]: INFO : Ignition finished successfully
Sep 10 23:39:10.255904 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:39:10.256564 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:39:10.258092 systemd[1]: Stopped target network.target - Network.
Sep 10 23:39:10.259264 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:39:10.259332 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:39:10.260657 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:39:10.260701 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:39:10.262057 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:39:10.262111 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:39:10.263328 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:39:10.263373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:39:10.264688 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:39:10.264733 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:39:10.266180 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:39:10.267561 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:39:10.271911 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:39:10.272019 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:39:10.275248 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:39:10.275507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:39:10.275545 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:39:10.278966 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:39:10.279194 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:39:10.279301 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:39:10.282339 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:39:10.282788 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:39:10.284238 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:39:10.284282 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:39:10.286508 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:39:10.287256 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:39:10.287309 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:39:10.289170 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:39:10.289214 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:39:10.290917 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:39:10.290957 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:39:10.294875 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:39:10.300422 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:39:10.306983 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:39:10.307185 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:39:10.312299 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:39:10.312494 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:39:10.314398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:39:10.314440 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:39:10.316150 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:39:10.316183 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:39:10.317604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:39:10.317667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:39:10.320117 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:39:10.320173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:39:10.322281 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:39:10.322333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:39:10.326069 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:39:10.327941 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:39:10.328001 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:39:10.330800 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:39:10.330852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:39:10.333630 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 10 23:39:10.333678 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:39:10.336384 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:39:10.336428 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:39:10.338348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:39:10.338401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:39:10.342818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:39:10.342914 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:39:10.344498 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:39:10.346454 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:39:10.355966 systemd[1]: Switching root.
Sep 10 23:39:10.396758 systemd-journald[245]: Journal stopped
Sep 10 23:39:11.191221 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:39:11.191274 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:39:11.191287 kernel: SELinux: policy capability open_perms=1
Sep 10 23:39:11.191300 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:39:11.191310 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:39:11.191323 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:39:11.191335 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:39:11.191349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:39:11.191361 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:39:11.191371 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:39:11.191381 kernel: audit: type=1403 audit(1757547550.572:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:39:11.191395 systemd[1]: Successfully loaded SELinux policy in 47.926ms.
Sep 10 23:39:11.191413 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.982ms.
Sep 10 23:39:11.191425 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:39:11.191437 systemd[1]: Detected virtualization kvm.
Sep 10 23:39:11.191448 systemd[1]: Detected architecture arm64.
Sep 10 23:39:11.191458 systemd[1]: Detected first boot.
Sep 10 23:39:11.191555 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:39:11.191567 zram_generator::config[1086]: No configuration found.
Sep 10 23:39:11.191579 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:39:11.191588 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:39:11.191599 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:39:11.191610 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:39:11.191628 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:39:11.191640 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:39:11.191652 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:39:11.191663 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:39:11.191673 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:39:11.191684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:39:11.191695 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:39:11.191705 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:39:11.191715 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:39:11.191726 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:39:11.191738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:39:11.191751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:39:11.191780 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:39:11.191790 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:39:11.191801 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:39:11.191812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:39:11.191822 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:39:11.191833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:39:11.191845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:39:11.191855 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:39:11.191866 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:39:11.191876 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:39:11.191887 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:39:11.191898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:39:11.191908 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:39:11.191919 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:39:11.191929 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:39:11.191940 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:39:11.191952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:39:11.191962 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:39:11.191973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:39:11.191984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:39:11.191995 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:39:11.192006 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:39:11.192017 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:39:11.192027 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:39:11.192038 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:39:11.192050 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:39:11.192066 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:39:11.192086 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:39:11.192097 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:39:11.192108 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:39:11.192119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:39:11.192129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:39:11.192140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:39:11.192153 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:39:11.192163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:39:11.192175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:39:11.192185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:39:11.192196 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:39:11.192205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:39:11.192216 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:39:11.192227 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:39:11.192238 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:39:11.192251 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:39:11.192262 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:39:11.192274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:39:11.192285 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:39:11.192296 kernel: ACPI: bus type drm_connector registered
Sep 10 23:39:11.192306 kernel: fuse: init (API version 7.41)
Sep 10 23:39:11.192315 kernel: loop: module loaded
Sep 10 23:39:11.192325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:39:11.192338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:39:11.192349 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:39:11.192360 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:39:11.192371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:39:11.192382 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:39:11.192394 systemd[1]: Stopped verity-setup.service.
Sep 10 23:39:11.192405 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:39:11.192416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:39:11.192426 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:39:11.192437 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:39:11.192483 systemd-journald[1161]: Collecting audit messages is disabled.
Sep 10 23:39:11.192510 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:39:11.192521 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:39:11.192546 systemd-journald[1161]: Journal started
Sep 10 23:39:11.192570 systemd-journald[1161]: Runtime Journal (/run/log/journal/a3919978b11c47f1bd21551b344691fc) is 6M, max 48.5M, 42.4M free.
Sep 10 23:39:10.965013 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:39:10.988546 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:39:10.988976 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:39:11.195608 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:39:11.197517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:39:11.200498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:39:11.201766 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:39:11.201936 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:39:11.203340 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:39:11.203536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:39:11.204704 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:39:11.204865 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:39:11.205993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:39:11.206173 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:39:11.207639 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:39:11.207811 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:39:11.208895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:39:11.209056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:39:11.210214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:39:11.211652 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:39:11.212960 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:39:11.214366 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:39:11.227639 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:39:11.230277 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:39:11.232456 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:39:11.233393 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:39:11.233425 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:39:11.235360 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:39:11.246416 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:39:11.247516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:39:11.248730 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:39:11.250583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:39:11.251735 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:39:11.254643 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:39:11.255765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:39:11.258720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:39:11.260296 systemd-journald[1161]: Time spent on flushing to /var/log/journal/a3919978b11c47f1bd21551b344691fc is 13.535ms for 884 entries.
Sep 10 23:39:11.260296 systemd-journald[1161]: System Journal (/var/log/journal/a3919978b11c47f1bd21551b344691fc) is 8M, max 195.6M, 187.6M free.
Sep 10 23:39:11.280071 systemd-journald[1161]: Received client request to flush runtime journal.
Sep 10 23:39:11.261626 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:39:11.264226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:39:11.268972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:39:11.270402 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:39:11.271907 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:39:11.280563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:39:11.282033 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:39:11.285610 kernel: loop0: detected capacity change from 0 to 107312
Sep 10 23:39:11.286074 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:39:11.290773 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:39:11.300821 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:39:11.313837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:39:11.320445 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 10 23:39:11.320480 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 10 23:39:11.321495 kernel: loop1: detected capacity change from 0 to 138376
Sep 10 23:39:11.327516 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:39:11.330573 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:39:11.342888 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:39:11.357512 kernel: loop2: detected capacity change from 0 to 211168
Sep 10 23:39:11.370953 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:39:11.375491 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:39:11.398499 kernel: loop3: detected capacity change from 0 to 107312
Sep 10 23:39:11.404550 kernel: loop4: detected capacity change from 0 to 138376
Sep 10 23:39:11.408179 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Sep 10 23:39:11.408199 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Sep 10 23:39:11.412500 kernel: loop5: detected capacity change from 0 to 211168
Sep 10 23:39:11.413892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:39:11.419174 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:39:11.419728 (sd-merge)[1228]: Merged extensions into '/usr'.
Sep 10 23:39:11.424632 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:39:11.424648 systemd[1]: Reloading...
Sep 10 23:39:11.490482 zram_generator::config[1257]: No configuration found.
Sep 10 23:39:11.539547 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:39:11.578648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:39:11.644084 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:39:11.644412 systemd[1]: Reloading finished in 219 ms.
Sep 10 23:39:11.670228 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:39:11.671576 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:39:11.696005 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:39:11.697796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:39:11.710667 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:39:11.710687 systemd[1]: Reloading...
Sep 10 23:39:11.714482 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:39:11.714516 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:39:11.714798 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:39:11.714998 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:39:11.715700 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:39:11.715916 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Sep 10 23:39:11.715958 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Sep 10 23:39:11.719160 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:39:11.719173 systemd-tmpfiles[1291]: Skipping /boot
Sep 10 23:39:11.728998 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:39:11.729013 systemd-tmpfiles[1291]: Skipping /boot
Sep 10 23:39:11.759490 zram_generator::config[1318]: No configuration found.
Sep 10 23:39:11.835619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:39:11.899846 systemd[1]: Reloading finished in 188 ms.
Sep 10 23:39:11.920493 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:39:11.926858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:39:11.936233 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:39:11.938651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:39:11.940849 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:39:11.943747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:39:11.947761 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:39:11.950758 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:39:11.958278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:39:11.962521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:39:11.965862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:39:11.968234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:39:11.970744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:39:11.970927 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:39:11.973012 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:39:11.978716 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:39:11.985438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:39:11.985660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:39:11.990499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:39:11.992095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:39:11.994155 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:39:11.994325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:39:12.001841 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:39:12.002404 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Sep 10 23:39:12.009125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:39:12.009331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:39:12.011212 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:39:12.020776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:39:12.022131 augenrules[1387]: No rules
Sep 10 23:39:12.022922 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:39:12.023142 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:39:12.025192 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:39:12.027226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:39:12.027401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:39:12.029312 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:39:12.031386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:39:12.034935 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:39:12.055971 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:39:12.071807 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:39:12.084730 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:39:12.085844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:39:12.087995 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:39:12.102584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:39:12.105647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:39:12.110318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:39:12.112071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:39:12.112960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:39:12.115373 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:39:12.120722 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:39:12.121657 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:39:12.126080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:39:12.126254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:39:12.127677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:39:12.127837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:39:12.129259 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:39:12.135686 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:39:12.135886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:39:12.146288 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:39:12.146518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:39:12.151620 augenrules[1434]: /sbin/augenrules: No change
Sep 10 23:39:12.156208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:39:12.156281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:39:12.158099 augenrules[1467]: No rules
Sep 10 23:39:12.159815 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:39:12.161761 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:39:12.168062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:39:12.172662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:39:12.202201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 23:39:12.267659 systemd-resolved[1357]: Positive Trust Anchors:
Sep 10 23:39:12.267679 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:39:12.267713 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:39:12.277391 systemd-networkd[1446]: lo: Link UP
Sep 10 23:39:12.277463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:39:12.277791 systemd-networkd[1446]: lo: Gained carrier
Sep 10 23:39:12.278732 systemd-networkd[1446]: Enumeration completed
Sep 10 23:39:12.279331 systemd-networkd[1446]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:39:12.279417 systemd-networkd[1446]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:39:12.279561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:39:12.279651 systemd-resolved[1357]: Defaulting to hostname 'linux'.
Sep 10 23:39:12.280309 systemd-networkd[1446]: eth0: Link UP
Sep 10 23:39:12.280524 systemd-networkd[1446]: eth0: Gained carrier
Sep 10 23:39:12.280635 systemd-networkd[1446]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:39:12.281624 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:39:12.283179 systemd[1]: Reached target network.target - Network.
Sep 10 23:39:12.284311 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:39:12.291696 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:39:12.294077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:39:12.297898 systemd-networkd[1446]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:39:12.310680 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 23:39:12.312913 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:39:12.743740 systemd-resolved[1357]: Clock change detected. Flushing caches. Sep 10 23:39:12.743847 systemd-timesyncd[1448]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:39:12.743895 systemd-timesyncd[1448]: Initial clock synchronization to Wed 2025-09-10 23:39:12.743687 UTC. Sep 10 23:39:12.747306 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:39:12.773858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:39:12.775184 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:39:12.776327 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:39:12.777328 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:39:12.778581 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:39:12.779748 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:39:12.780823 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:39:12.781909 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 10 23:39:12.781947 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:39:12.782768 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:39:12.784722 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:39:12.787200 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:39:12.790611 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:39:12.791932 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:39:12.793135 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:39:12.798266 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:39:12.799859 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:39:12.801582 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:39:12.802545 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:39:12.803343 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:39:12.804127 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:39:12.804171 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:39:12.805356 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:39:12.807345 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:39:12.809352 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:39:12.811387 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:39:12.814428 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 10 23:39:12.815444 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:39:12.817478 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:39:12.819841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:39:12.823348 jq[1507]: false Sep 10 23:39:12.823512 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:39:12.827431 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:39:12.831373 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:39:12.833424 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:39:12.834092 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:39:12.834644 extend-filesystems[1508]: Found /dev/vda6 Sep 10 23:39:12.834857 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:39:12.837785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:39:12.840590 extend-filesystems[1508]: Found /dev/vda9 Sep 10 23:39:12.842057 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:39:12.843607 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:39:12.844051 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:39:12.844534 extend-filesystems[1508]: Checking size of /dev/vda9 Sep 10 23:39:12.850930 jq[1526]: true Sep 10 23:39:12.852405 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 10 23:39:12.853270 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:39:12.856185 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:39:12.856441 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:39:12.876685 update_engine[1525]: I20250910 23:39:12.876418 1525 main.cc:92] Flatcar Update Engine starting Sep 10 23:39:12.877169 extend-filesystems[1508]: Resized partition /dev/vda9 Sep 10 23:39:12.877427 (ntainerd)[1539]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:39:12.882904 jq[1537]: true Sep 10 23:39:12.883593 extend-filesystems[1545]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:39:12.891479 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:39:12.895845 dbus-daemon[1505]: [system] SELinux support is enabled Sep 10 23:39:12.896041 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:39:12.901716 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:39:12.901758 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:39:12.904201 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:39:12.904639 tar[1529]: linux-arm64/LICENSE Sep 10 23:39:12.904639 tar[1529]: linux-arm64/helm Sep 10 23:39:12.904712 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:39:12.904732 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:39:12.907249 systemd-logind[1520]: New seat seat0. 
Sep 10 23:39:12.908313 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:39:12.916262 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:39:12.920343 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:39:12.928544 update_engine[1525]: I20250910 23:39:12.920445 1525 update_check_scheduler.cc:74] Next update check in 7m16s Sep 10 23:39:12.928400 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:39:12.929312 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:39:12.929312 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:39:12.929312 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:39:12.940676 extend-filesystems[1508]: Resized filesystem in /dev/vda9 Sep 10 23:39:12.944188 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:39:12.930742 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:39:12.931493 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:39:12.937179 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:39:12.948843 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 10 23:39:13.036406 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:39:13.082659 containerd[1539]: time="2025-09-10T23:39:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:39:13.085734 containerd[1539]: time="2025-09-10T23:39:13.085687215Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:39:13.101255 containerd[1539]: time="2025-09-10T23:39:13.101161975Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.44µs" Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101419495Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101453295Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101643335Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101660215Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101684455Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101735415Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101746295Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.101994815Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.102008615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.102018655Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.102026295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102287 containerd[1539]: time="2025-09-10T23:39:13.102091575Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102673 containerd[1539]: time="2025-09-10T23:39:13.102650135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102760 containerd[1539]: time="2025-09-10T23:39:13.102744415Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:39:13.102815 containerd[1539]: time="2025-09-10T23:39:13.102802535Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:39:13.102892 containerd[1539]: time="2025-09-10T23:39:13.102877575Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:39:13.103155 containerd[1539]: time="2025-09-10T23:39:13.103136135Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:39:13.103313 containerd[1539]: time="2025-09-10T23:39:13.103293815Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:39:13.107796 containerd[1539]: time="2025-09-10T23:39:13.107758495Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:39:13.107934 containerd[1539]: time="2025-09-10T23:39:13.107920335Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:39:13.108060 containerd[1539]: time="2025-09-10T23:39:13.108043495Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:39:13.108116 containerd[1539]: time="2025-09-10T23:39:13.108104895Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:39:13.108165 containerd[1539]: time="2025-09-10T23:39:13.108153375Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:39:13.108219 containerd[1539]: time="2025-09-10T23:39:13.108206655Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:39:13.108287 containerd[1539]: time="2025-09-10T23:39:13.108274695Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:39:13.108340 containerd[1539]: time="2025-09-10T23:39:13.108328215Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:39:13.108404 containerd[1539]: time="2025-09-10T23:39:13.108391335Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:39:13.108477 containerd[1539]: time="2025-09-10T23:39:13.108463255Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:39:13.108528 containerd[1539]: time="2025-09-10T23:39:13.108516215Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:39:13.108587 containerd[1539]: time="2025-09-10T23:39:13.108574495Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:39:13.108786 containerd[1539]: time="2025-09-10T23:39:13.108764655Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:39:13.108860 containerd[1539]: time="2025-09-10T23:39:13.108845295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:39:13.108925 containerd[1539]: time="2025-09-10T23:39:13.108911415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:39:13.108984 containerd[1539]: time="2025-09-10T23:39:13.108970775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:39:13.109035 containerd[1539]: time="2025-09-10T23:39:13.109023135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:39:13.109088 containerd[1539]: time="2025-09-10T23:39:13.109076135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:39:13.109143 containerd[1539]: time="2025-09-10T23:39:13.109131175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:39:13.109204 containerd[1539]: time="2025-09-10T23:39:13.109190495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 
23:39:13.109284 containerd[1539]: time="2025-09-10T23:39:13.109271535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:39:13.109337 containerd[1539]: time="2025-09-10T23:39:13.109324775Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:39:13.109402 containerd[1539]: time="2025-09-10T23:39:13.109387015Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:39:13.109691 containerd[1539]: time="2025-09-10T23:39:13.109675455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:39:13.109768 containerd[1539]: time="2025-09-10T23:39:13.109755615Z" level=info msg="Start snapshots syncer" Sep 10 23:39:13.109844 containerd[1539]: time="2025-09-10T23:39:13.109831335Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:39:13.110352 containerd[1539]: time="2025-09-10T23:39:13.110314135Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:39:13.110528 containerd[1539]: time="2025-09-10T23:39:13.110508095Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:39:13.110660 containerd[1539]: time="2025-09-10T23:39:13.110644415Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:39:13.110871 containerd[1539]: time="2025-09-10T23:39:13.110850055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:39:13.110947 containerd[1539]: time="2025-09-10T23:39:13.110934175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:39:13.111015 containerd[1539]: time="2025-09-10T23:39:13.111001775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:39:13.111068 containerd[1539]: time="2025-09-10T23:39:13.111055935Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:39:13.111118 containerd[1539]: time="2025-09-10T23:39:13.111106495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:39:13.111169 containerd[1539]: time="2025-09-10T23:39:13.111156655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:39:13.111222 containerd[1539]: time="2025-09-10T23:39:13.111209655Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:39:13.111320 containerd[1539]: time="2025-09-10T23:39:13.111305975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:39:13.111386 containerd[1539]: time="2025-09-10T23:39:13.111362735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:39:13.111459 containerd[1539]: time="2025-09-10T23:39:13.111445215Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:39:13.111557 containerd[1539]: time="2025-09-10T23:39:13.111542575Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:39:13.111643 containerd[1539]: time="2025-09-10T23:39:13.111601775Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:39:13.111692 containerd[1539]: time="2025-09-10T23:39:13.111679975Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:39:13.111741 containerd[1539]: time="2025-09-10T23:39:13.111728815Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:39:13.111786 containerd[1539]: time="2025-09-10T23:39:13.111774495Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:39:13.111850 containerd[1539]: time="2025-09-10T23:39:13.111837335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:39:13.111901 containerd[1539]: time="2025-09-10T23:39:13.111889615Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:39:13.112017 containerd[1539]: time="2025-09-10T23:39:13.112006615Z" level=info msg="runtime interface created" Sep 10 23:39:13.112061 containerd[1539]: time="2025-09-10T23:39:13.112049815Z" level=info msg="created NRI interface" Sep 10 23:39:13.112111 containerd[1539]: time="2025-09-10T23:39:13.112099495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:39:13.112164 containerd[1539]: time="2025-09-10T23:39:13.112153295Z" level=info msg="Connect containerd service" Sep 10 23:39:13.112263 containerd[1539]: time="2025-09-10T23:39:13.112226295Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:39:13.113207 
containerd[1539]: time="2025-09-10T23:39:13.113179735Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:39:13.196050 containerd[1539]: time="2025-09-10T23:39:13.195949295Z" level=info msg="Start subscribing containerd event" Sep 10 23:39:13.196050 containerd[1539]: time="2025-09-10T23:39:13.196036415Z" level=info msg="Start recovering state" Sep 10 23:39:13.196170 containerd[1539]: time="2025-09-10T23:39:13.196132735Z" level=info msg="Start event monitor" Sep 10 23:39:13.196170 containerd[1539]: time="2025-09-10T23:39:13.196146495Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:39:13.196170 containerd[1539]: time="2025-09-10T23:39:13.196156055Z" level=info msg="Start streaming server" Sep 10 23:39:13.196170 containerd[1539]: time="2025-09-10T23:39:13.196167495Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:39:13.196284 containerd[1539]: time="2025-09-10T23:39:13.196175295Z" level=info msg="runtime interface starting up..." Sep 10 23:39:13.196284 containerd[1539]: time="2025-09-10T23:39:13.196181455Z" level=info msg="starting plugins..." Sep 10 23:39:13.196284 containerd[1539]: time="2025-09-10T23:39:13.196194095Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:39:13.196565 containerd[1539]: time="2025-09-10T23:39:13.196538575Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:39:13.196739 containerd[1539]: time="2025-09-10T23:39:13.196723575Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:39:13.196952 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 23:39:13.197271 containerd[1539]: time="2025-09-10T23:39:13.197069415Z" level=info msg="containerd successfully booted in 0.114779s" Sep 10 23:39:13.235185 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:39:13.258400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:39:13.261537 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:39:13.277448 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:39:13.277713 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:39:13.282323 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:39:13.311281 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:39:13.314148 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:39:13.317541 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:39:13.320615 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:39:13.343990 tar[1529]: linux-arm64/README.md Sep 10 23:39:13.364743 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:39:14.651448 systemd-networkd[1446]: eth0: Gained IPv6LL Sep 10 23:39:14.653958 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:39:14.655628 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:39:14.660117 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:39:14.662531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:39:14.664755 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:39:14.695335 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:39:14.696847 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Sep 10 23:39:14.697272 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:39:14.699563 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:39:15.320366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:39:15.321639 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:39:15.322752 systemd[1]: Startup finished in 2.043s (kernel) + 4.973s (initrd) + 4.373s (userspace) = 11.390s. Sep 10 23:39:15.326053 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:39:15.744663 kubelet[1636]: E0910 23:39:15.744547 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:39:15.747501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:39:15.747631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:39:15.747923 systemd[1]: kubelet.service: Consumed 764ms CPU time, 258.3M memory peak. Sep 10 23:39:19.033697 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:39:19.034940 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:48536.service - OpenSSH per-connection server daemon (10.0.0.1:48536). Sep 10 23:39:19.127944 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 48536 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:39:19.130813 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:39:19.146295 systemd-logind[1520]: New session 1 of user core. 
Sep 10 23:39:19.146790 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:39:19.148441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 23:39:19.172069 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:39:19.176465 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:39:19.203135 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:39:19.205538 systemd-logind[1520]: New session c1 of user core. Sep 10 23:39:19.334140 systemd[1653]: Queued start job for default target default.target. Sep 10 23:39:19.355375 systemd[1653]: Created slice app.slice - User Application Slice. Sep 10 23:39:19.355414 systemd[1653]: Reached target paths.target - Paths. Sep 10 23:39:19.355459 systemd[1653]: Reached target timers.target - Timers. Sep 10 23:39:19.356724 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:39:19.365731 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:39:19.365801 systemd[1653]: Reached target sockets.target - Sockets. Sep 10 23:39:19.365844 systemd[1653]: Reached target basic.target - Basic System. Sep 10 23:39:19.365871 systemd[1653]: Reached target default.target - Main User Target. Sep 10 23:39:19.365898 systemd[1653]: Startup finished in 154ms. Sep 10 23:39:19.366154 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:39:19.367720 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:39:19.445144 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:48552.service - OpenSSH per-connection server daemon (10.0.0.1:48552). 
Sep 10 23:39:19.524977 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 48552 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:39:19.526864 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:39:19.538945 systemd-logind[1520]: New session 2 of user core.
Sep 10 23:39:19.545474 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 10 23:39:19.598861 sshd[1666]: Connection closed by 10.0.0.1 port 48552
Sep 10 23:39:19.598807 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Sep 10 23:39:19.621497 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:48552.service: Deactivated successfully.
Sep 10 23:39:19.624464 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 23:39:19.625566 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit.
Sep 10 23:39:19.627656 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554).
Sep 10 23:39:19.630837 systemd-logind[1520]: Removed session 2.
Sep 10 23:39:19.687488 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:39:19.688912 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:39:19.693292 systemd-logind[1520]: New session 3 of user core.
Sep 10 23:39:19.705569 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:39:19.757721 sshd[1675]: Connection closed by 10.0.0.1 port 48554
Sep 10 23:39:19.758348 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 10 23:39:19.779655 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:48554.service: Deactivated successfully.
Sep 10 23:39:19.782947 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:39:19.785267 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:39:19.786797 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566).
Sep 10 23:39:19.787977 systemd-logind[1520]: Removed session 3.
Sep 10 23:39:19.851157 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:39:19.853073 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:39:19.857206 systemd-logind[1520]: New session 4 of user core.
Sep 10 23:39:19.880462 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:39:19.936337 sshd[1683]: Connection closed by 10.0.0.1 port 48566
Sep 10 23:39:19.936839 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 10 23:39:19.952880 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:48566.service: Deactivated successfully.
Sep 10 23:39:19.955687 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:39:19.956366 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:39:19.958961 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:33284.service - OpenSSH per-connection server daemon (10.0.0.1:33284).
Sep 10 23:39:19.959579 systemd-logind[1520]: Removed session 4.
Sep 10 23:39:20.021496 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 33284 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:39:20.022883 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:39:20.027778 systemd-logind[1520]: New session 5 of user core.
Sep 10 23:39:20.035461 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:39:20.095139 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:39:20.095785 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:39:20.113017 sudo[1692]: pam_unix(sudo:session): session closed for user root
Sep 10 23:39:20.115715 sshd[1691]: Connection closed by 10.0.0.1 port 33284
Sep 10 23:39:20.115459 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Sep 10 23:39:20.130220 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:33284.service: Deactivated successfully.
Sep 10 23:39:20.131777 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:39:20.132633 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:39:20.135485 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:33298.service - OpenSSH per-connection server daemon (10.0.0.1:33298).
Sep 10 23:39:20.136038 systemd-logind[1520]: Removed session 5.
Sep 10 23:39:20.191977 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 33298 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:39:20.193564 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:39:20.198390 systemd-logind[1520]: New session 6 of user core.
Sep 10 23:39:20.208454 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:39:20.259290 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:39:20.259906 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:39:20.392344 sudo[1702]: pam_unix(sudo:session): session closed for user root
Sep 10 23:39:20.397268 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:39:20.397824 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:39:20.405866 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:39:20.461331 augenrules[1724]: No rules
Sep 10 23:39:20.462874 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:39:20.464335 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:39:20.465707 sudo[1701]: pam_unix(sudo:session): session closed for user root
Sep 10 23:39:20.467172 sshd[1700]: Connection closed by 10.0.0.1 port 33298
Sep 10 23:39:20.467559 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Sep 10 23:39:20.490213 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:33298.service: Deactivated successfully.
Sep 10 23:39:20.492216 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:39:20.493873 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:39:20.496207 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:33314.service - OpenSSH per-connection server daemon (10.0.0.1:33314).
Sep 10 23:39:20.497174 systemd-logind[1520]: Removed session 6.
Sep 10 23:39:20.561674 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 33314 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:39:20.563107 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:39:20.567198 systemd-logind[1520]: New session 7 of user core.
Sep 10 23:39:20.584481 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:39:20.636026 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:39:20.636661 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:39:20.977116 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:39:21.008638 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:39:21.244157 dockerd[1757]: time="2025-09-10T23:39:21.244006255Z" level=info msg="Starting up"
Sep 10 23:39:21.246580 dockerd[1757]: time="2025-09-10T23:39:21.246455855Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:39:21.336026 dockerd[1757]: time="2025-09-10T23:39:21.335871815Z" level=info msg="Loading containers: start."
Sep 10 23:39:21.347268 kernel: Initializing XFRM netlink socket
Sep 10 23:39:21.555938 systemd-networkd[1446]: docker0: Link UP
Sep 10 23:39:21.561057 dockerd[1757]: time="2025-09-10T23:39:21.561008615Z" level=info msg="Loading containers: done."
Sep 10 23:39:21.578012 dockerd[1757]: time="2025-09-10T23:39:21.577936215Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:39:21.578193 dockerd[1757]: time="2025-09-10T23:39:21.578043255Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 10 23:39:21.578193 dockerd[1757]: time="2025-09-10T23:39:21.578167095Z" level=info msg="Initializing buildkit"
Sep 10 23:39:21.602856 dockerd[1757]: time="2025-09-10T23:39:21.602808855Z" level=info msg="Completed buildkit initialization"
Sep 10 23:39:21.609100 dockerd[1757]: time="2025-09-10T23:39:21.609053895Z" level=info msg="Daemon has completed initialization"
Sep 10 23:39:21.609337 dockerd[1757]: time="2025-09-10T23:39:21.609144095Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:39:21.609486 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:39:22.150137 containerd[1539]: time="2025-09-10T23:39:22.150080815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 10 23:39:22.724557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354333836.mount: Deactivated successfully.
Sep 10 23:39:23.715811 containerd[1539]: time="2025-09-10T23:39:23.715757655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:23.719255 containerd[1539]: time="2025-09-10T23:39:23.716949095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230"
Sep 10 23:39:23.719255 containerd[1539]: time="2025-09-10T23:39:23.718962735Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:23.722312 containerd[1539]: time="2025-09-10T23:39:23.721824535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:23.722836 containerd[1539]: time="2025-09-10T23:39:23.722799695Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.57267044s"
Sep 10 23:39:23.722884 containerd[1539]: time="2025-09-10T23:39:23.722839495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 10 23:39:23.724053 containerd[1539]: time="2025-09-10T23:39:23.724023135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 10 23:39:24.955264 containerd[1539]: time="2025-09-10T23:39:24.955021015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:24.955764 containerd[1539]: time="2025-09-10T23:39:24.955729895Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919"
Sep 10 23:39:24.957514 containerd[1539]: time="2025-09-10T23:39:24.957474495Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:24.960802 containerd[1539]: time="2025-09-10T23:39:24.960383775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:24.961432 containerd[1539]: time="2025-09-10T23:39:24.961397655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.23733424s"
Sep 10 23:39:24.961508 containerd[1539]: time="2025-09-10T23:39:24.961435815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 10 23:39:24.962038 containerd[1539]: time="2025-09-10T23:39:24.962014735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 10 23:39:25.998064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:39:25.999506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:39:26.014818 containerd[1539]: time="2025-09-10T23:39:26.014764775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:26.015779 containerd[1539]: time="2025-09-10T23:39:26.015540415Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979"
Sep 10 23:39:26.017189 containerd[1539]: time="2025-09-10T23:39:26.017157375Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:26.020601 containerd[1539]: time="2025-09-10T23:39:26.020504775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:26.021770 containerd[1539]: time="2025-09-10T23:39:26.021443735Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.05940112s"
Sep 10 23:39:26.021770 containerd[1539]: time="2025-09-10T23:39:26.021480695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 10 23:39:26.022443 containerd[1539]: time="2025-09-10T23:39:26.022040415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 10 23:39:26.146059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:39:26.150536 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:39:26.217339 kubelet[2042]: E0910 23:39:26.217262 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:39:26.221569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:39:26.221705 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:39:26.222003 systemd[1]: kubelet.service: Consumed 160ms CPU time, 108.5M memory peak.
Sep 10 23:39:27.124387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174027566.mount: Deactivated successfully.
Sep 10 23:39:27.536273 containerd[1539]: time="2025-09-10T23:39:27.535509415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:27.536273 containerd[1539]: time="2025-09-10T23:39:27.536124095Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108"
Sep 10 23:39:27.537254 containerd[1539]: time="2025-09-10T23:39:27.537149695Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:27.540338 containerd[1539]: time="2025-09-10T23:39:27.540291095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:27.541307 containerd[1539]: time="2025-09-10T23:39:27.541268655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.51915424s"
Sep 10 23:39:27.541353 containerd[1539]: time="2025-09-10T23:39:27.541306975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Sep 10 23:39:27.542029 containerd[1539]: time="2025-09-10T23:39:27.542002255Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 10 23:39:28.116750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314324713.mount: Deactivated successfully.
Sep 10 23:39:28.905403 containerd[1539]: time="2025-09-10T23:39:28.905342815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:28.905871 containerd[1539]: time="2025-09-10T23:39:28.905842575Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 10 23:39:28.907584 containerd[1539]: time="2025-09-10T23:39:28.907540615Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:28.911280 containerd[1539]: time="2025-09-10T23:39:28.910849975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:28.911957 containerd[1539]: time="2025-09-10T23:39:28.911910655Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.36987068s"
Sep 10 23:39:28.911957 containerd[1539]: time="2025-09-10T23:39:28.911951215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 10 23:39:28.912498 containerd[1539]: time="2025-09-10T23:39:28.912408815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:39:29.583909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158833485.mount: Deactivated successfully.
Sep 10 23:39:29.669513 containerd[1539]: time="2025-09-10T23:39:29.669448255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:39:29.679978 containerd[1539]: time="2025-09-10T23:39:29.679928615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 23:39:29.690432 containerd[1539]: time="2025-09-10T23:39:29.690371295Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:39:29.703222 containerd[1539]: time="2025-09-10T23:39:29.703149895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:39:29.704165 containerd[1539]: time="2025-09-10T23:39:29.703847495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 791.4066ms"
Sep 10 23:39:29.704165 containerd[1539]: time="2025-09-10T23:39:29.703892655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:39:29.704471 containerd[1539]: time="2025-09-10T23:39:29.704346535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 10 23:39:30.190413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627422098.mount: Deactivated successfully.
Sep 10 23:39:31.888804 containerd[1539]: time="2025-09-10T23:39:31.888747055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:31.889843 containerd[1539]: time="2025-09-10T23:39:31.889655855Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859"
Sep 10 23:39:31.890694 containerd[1539]: time="2025-09-10T23:39:31.890660215Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:31.894097 containerd[1539]: time="2025-09-10T23:39:31.894051015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:39:31.896394 containerd[1539]: time="2025-09-10T23:39:31.896122455Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.19174704s"
Sep 10 23:39:31.896394 containerd[1539]: time="2025-09-10T23:39:31.896167655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 10 23:39:36.472112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 10 23:39:36.473587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:39:36.641097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:39:36.645180 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:39:36.676116 kubelet[2201]: E0910 23:39:36.676069 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:39:36.678730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:39:36.678866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:39:36.679137 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107M memory peak.
Sep 10 23:39:36.991107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:39:36.991282 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107M memory peak.
Sep 10 23:39:36.993410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:39:37.016705 systemd[1]: Reload requested from client PID 2215 ('systemctl') (unit session-7.scope)...
Sep 10 23:39:37.016722 systemd[1]: Reloading...
Sep 10 23:39:37.090845 zram_generator::config[2258]: No configuration found.
Sep 10 23:39:37.164110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:39:37.253974 systemd[1]: Reloading finished in 236 ms.
Sep 10 23:39:37.316705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:39:37.318857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:39:37.320568 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:39:37.321309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:39:37.321354 systemd[1]: kubelet.service: Consumed 99ms CPU time, 95.1M memory peak.
Sep 10 23:39:37.322938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:39:37.469594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:39:37.474013 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:39:37.508133 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:39:37.508133 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:39:37.508133 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:39:37.508133 kubelet[2305]: I0910 23:39:37.507975 2305 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:39:37.789012 kubelet[2305]: I0910 23:39:37.788898 2305 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 10 23:39:37.789012 kubelet[2305]: I0910 23:39:37.788930 2305 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:39:37.789179 kubelet[2305]: I0910 23:39:37.789164 2305 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 10 23:39:37.812704 kubelet[2305]: E0910 23:39:37.812648 2305 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 10 23:39:37.814624 kubelet[2305]: I0910 23:39:37.814587 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:39:37.824476 kubelet[2305]: I0910 23:39:37.824423 2305 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:39:37.827288 kubelet[2305]: I0910 23:39:37.827266 2305 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:39:37.827592 kubelet[2305]: I0910 23:39:37.827563 2305 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:39:37.827743 kubelet[2305]: I0910 23:39:37.827594 2305 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:39:37.827826 kubelet[2305]: I0910 23:39:37.827810 2305 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:39:37.827826 kubelet[2305]: I0910 23:39:37.827819 2305 container_manager_linux.go:303] "Creating device plugin manager"
Sep 10 23:39:37.828035 kubelet[2305]: I0910 23:39:37.828021 2305 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:39:37.830508 kubelet[2305]: I0910 23:39:37.830482 2305 kubelet.go:480] "Attempting to sync node with API server"
Sep 10 23:39:37.830508 kubelet[2305]: I0910 23:39:37.830508 2305 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:39:37.830580 kubelet[2305]: I0910 23:39:37.830537 2305 kubelet.go:386] "Adding apiserver pod source"
Sep 10 23:39:37.831863 kubelet[2305]: I0910 23:39:37.831829 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:39:37.833096 kubelet[2305]: E0910 23:39:37.833033 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 10 23:39:37.833096 kubelet[2305]: E0910 23:39:37.833051 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 10 23:39:37.833359 kubelet[2305]: I0910 23:39:37.833339 2305 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 10 23:39:37.834031 kubelet[2305]: I0910 23:39:37.834010 2305 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 10 23:39:37.834143 kubelet[2305]: W0910 23:39:37.834130 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:39:37.836818 kubelet[2305]: I0910 23:39:37.836790 2305 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 23:39:37.836872 kubelet[2305]: I0910 23:39:37.836850 2305 server.go:1289] "Started kubelet"
Sep 10 23:39:37.836964 kubelet[2305]: I0910 23:39:37.836920 2305 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:39:37.837964 kubelet[2305]: I0910 23:39:37.837926 2305 server.go:317] "Adding debug handlers to kubelet server"
Sep 10 23:39:37.839179 kubelet[2305]: I0910 23:39:37.839128 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:39:37.839451 kubelet[2305]: I0910 23:39:37.839420 2305 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:39:37.841424 kubelet[2305]: I0910 23:39:37.840832 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:39:37.841424 kubelet[2305]: I0910 23:39:37.840928 2305 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 23:39:37.841424 kubelet[2305]: I0910 23:39:37.841008 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:39:37.841581 kubelet[2305]: E0910 23:39:37.841032 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:39:37.844099 kubelet[2305]: I0910 23:39:37.844062 2305 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 23:39:37.844156 kubelet[2305]: I0910 23:39:37.844137 2305 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:39:37.845036 kubelet[2305]: I0910 23:39:37.844942 2305 factory.go:223] Registration of the systemd container factory successfully
Sep 10 23:39:37.845487 kubelet[2305]: I0910 23:39:37.845454 2305 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 23:39:37.845807 kubelet[2305]: E0910 23:39:37.845731 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms"
Sep 10 23:39:37.845974 kubelet[2305]: E0910 23:39:37.845953 2305 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 23:39:37.847276 kubelet[2305]: E0910 23:39:37.847135 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 10 23:39:37.847348 kubelet[2305]: I0910 23:39:37.847319 2305 factory.go:223] Registration of the containerd container factory successfully
Sep 10 23:39:37.849443 kubelet[2305]: E0910 23:39:37.847618 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18641035cccfb05f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:39:37.836810335 +0000 UTC
m=+0.359308361,LastTimestamp:2025-09-10 23:39:37.836810335 +0000 UTC m=+0.359308361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:39:37.857810 kubelet[2305]: I0910 23:39:37.857768 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:39:37.857810 kubelet[2305]: I0910 23:39:37.857793 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:39:37.857810 kubelet[2305]: I0910 23:39:37.857814 2305 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:39:37.862392 kubelet[2305]: I0910 23:39:37.862316 2305 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:39:37.863613 kubelet[2305]: I0910 23:39:37.863588 2305 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:39:37.863613 kubelet[2305]: I0910 23:39:37.863616 2305 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:39:37.863687 kubelet[2305]: I0910 23:39:37.863635 2305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 23:39:37.863687 kubelet[2305]: I0910 23:39:37.863642 2305 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:39:37.863730 kubelet[2305]: E0910 23:39:37.863686 2305 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:39:37.942173 kubelet[2305]: E0910 23:39:37.942128 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:37.964581 kubelet[2305]: E0910 23:39:37.964541 2305 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:39:38.004988 kubelet[2305]: I0910 23:39:38.004945 2305 policy_none.go:49] "None policy: Start" Sep 10 23:39:38.004988 kubelet[2305]: I0910 23:39:38.004979 2305 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:39:38.004988 kubelet[2305]: I0910 23:39:38.004992 2305 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:39:38.005223 kubelet[2305]: E0910 23:39:38.005175 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:39:38.011819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 23:39:38.034008 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 23:39:38.043267 kubelet[2305]: E0910 23:39:38.043119 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:38.046712 kubelet[2305]: E0910 23:39:38.046680 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Sep 10 23:39:38.052826 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:39:38.054302 kubelet[2305]: E0910 23:39:38.054268 2305 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:39:38.054504 kubelet[2305]: I0910 23:39:38.054474 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:39:38.054564 kubelet[2305]: I0910 23:39:38.054495 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:39:38.054728 kubelet[2305]: I0910 23:39:38.054710 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:39:38.055854 kubelet[2305]: E0910 23:39:38.055829 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:39:38.055921 kubelet[2305]: E0910 23:39:38.055881 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:39:38.156762 kubelet[2305]: I0910 23:39:38.156715 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:39:38.157267 kubelet[2305]: E0910 23:39:38.157211 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 10 23:39:38.176491 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 10 23:39:38.190108 kubelet[2305]: E0910 23:39:38.190075 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:38.193417 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 10 23:39:38.206599 kubelet[2305]: E0910 23:39:38.206552 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:38.209079 systemd[1]: Created slice kubepods-burstable-pod0b94ff0a1054731a532001fc5d3bae98.slice - libcontainer container kubepods-burstable-pod0b94ff0a1054731a532001fc5d3bae98.slice. 
Sep 10 23:39:38.210679 kubelet[2305]: E0910 23:39:38.210643 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:38.245849 kubelet[2305]: I0910 23:39:38.245793 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:38.245849 kubelet[2305]: I0910 23:39:38.245839 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:38.245932 kubelet[2305]: I0910 23:39:38.245859 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:38.245932 kubelet[2305]: I0910 23:39:38.245874 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b94ff0a1054731a532001fc5d3bae98-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b94ff0a1054731a532001fc5d3bae98\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:38.245932 kubelet[2305]: I0910 23:39:38.245889 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/0b94ff0a1054731a532001fc5d3bae98-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b94ff0a1054731a532001fc5d3bae98\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:38.245932 kubelet[2305]: I0910 23:39:38.245902 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b94ff0a1054731a532001fc5d3bae98-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b94ff0a1054731a532001fc5d3bae98\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:38.245932 kubelet[2305]: I0910 23:39:38.245919 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:38.246032 kubelet[2305]: I0910 23:39:38.245933 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:38.246032 kubelet[2305]: I0910 23:39:38.245948 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:38.359068 kubelet[2305]: I0910 23:39:38.358944 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:39:38.359706 kubelet[2305]: E0910 
23:39:38.359659 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 10 23:39:38.447395 kubelet[2305]: E0910 23:39:38.447351 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Sep 10 23:39:38.491294 containerd[1539]: time="2025-09-10T23:39:38.491220855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 10 23:39:38.507884 containerd[1539]: time="2025-09-10T23:39:38.507800975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 10 23:39:38.510565 containerd[1539]: time="2025-09-10T23:39:38.510528495Z" level=info msg="connecting to shim c9227f7ab6c77e3b5f495aa27b6ce035af161b65a6730ea59037e41e6a601847" address="unix:///run/containerd/s/4ef2b57dd8c9a17198522a750cfd62df0e9823edb422a9beb555b6fc4976d3bd" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:39:38.511829 containerd[1539]: time="2025-09-10T23:39:38.511623855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b94ff0a1054731a532001fc5d3bae98,Namespace:kube-system,Attempt:0,}" Sep 10 23:39:38.535443 systemd[1]: Started cri-containerd-c9227f7ab6c77e3b5f495aa27b6ce035af161b65a6730ea59037e41e6a601847.scope - libcontainer container c9227f7ab6c77e3b5f495aa27b6ce035af161b65a6730ea59037e41e6a601847. 
Sep 10 23:39:38.543514 containerd[1539]: time="2025-09-10T23:39:38.543406255Z" level=info msg="connecting to shim 859bf9d6dabccb1ddd1b5fc57febbea89f6477d81801b4eb38643388548af02f" address="unix:///run/containerd/s/c6247381338dbb34dc1dfe50c6b35532850a9f74c16e30265e7e76d212969f87" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:39:38.546224 containerd[1539]: time="2025-09-10T23:39:38.546174175Z" level=info msg="connecting to shim f68ee5ef0ca0c1f3f2bff6beace5f56aa5cb982ca28a8e9b9ab883e37cf00432" address="unix:///run/containerd/s/3c5f7dcd48459d33023d7c3958fb6e2f90d855be83698857b7d0c22e9097f127" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:39:38.571825 systemd[1]: Started cri-containerd-859bf9d6dabccb1ddd1b5fc57febbea89f6477d81801b4eb38643388548af02f.scope - libcontainer container 859bf9d6dabccb1ddd1b5fc57febbea89f6477d81801b4eb38643388548af02f. Sep 10 23:39:38.578797 systemd[1]: Started cri-containerd-f68ee5ef0ca0c1f3f2bff6beace5f56aa5cb982ca28a8e9b9ab883e37cf00432.scope - libcontainer container f68ee5ef0ca0c1f3f2bff6beace5f56aa5cb982ca28a8e9b9ab883e37cf00432. 
Sep 10 23:39:38.584811 containerd[1539]: time="2025-09-10T23:39:38.584761695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9227f7ab6c77e3b5f495aa27b6ce035af161b65a6730ea59037e41e6a601847\"" Sep 10 23:39:38.592218 containerd[1539]: time="2025-09-10T23:39:38.591739295Z" level=info msg="CreateContainer within sandbox \"c9227f7ab6c77e3b5f495aa27b6ce035af161b65a6730ea59037e41e6a601847\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:39:38.601072 containerd[1539]: time="2025-09-10T23:39:38.601029815Z" level=info msg="Container 217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:39:38.614310 containerd[1539]: time="2025-09-10T23:39:38.614108615Z" level=info msg="CreateContainer within sandbox \"c9227f7ab6c77e3b5f495aa27b6ce035af161b65a6730ea59037e41e6a601847\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916\"" Sep 10 23:39:38.615583 containerd[1539]: time="2025-09-10T23:39:38.615557695Z" level=info msg="StartContainer for \"217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916\"" Sep 10 23:39:38.617474 containerd[1539]: time="2025-09-10T23:39:38.617443535Z" level=info msg="connecting to shim 217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916" address="unix:///run/containerd/s/4ef2b57dd8c9a17198522a750cfd62df0e9823edb422a9beb555b6fc4976d3bd" protocol=ttrpc version=3 Sep 10 23:39:38.621054 containerd[1539]: time="2025-09-10T23:39:38.621017295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"859bf9d6dabccb1ddd1b5fc57febbea89f6477d81801b4eb38643388548af02f\"" Sep 10 
23:39:38.623595 containerd[1539]: time="2025-09-10T23:39:38.623566095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b94ff0a1054731a532001fc5d3bae98,Namespace:kube-system,Attempt:0,} returns sandbox id \"f68ee5ef0ca0c1f3f2bff6beace5f56aa5cb982ca28a8e9b9ab883e37cf00432\"" Sep 10 23:39:38.626086 containerd[1539]: time="2025-09-10T23:39:38.625959975Z" level=info msg="CreateContainer within sandbox \"859bf9d6dabccb1ddd1b5fc57febbea89f6477d81801b4eb38643388548af02f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:39:38.629342 containerd[1539]: time="2025-09-10T23:39:38.629231055Z" level=info msg="CreateContainer within sandbox \"f68ee5ef0ca0c1f3f2bff6beace5f56aa5cb982ca28a8e9b9ab883e37cf00432\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:39:38.639053 containerd[1539]: time="2025-09-10T23:39:38.639002375Z" level=info msg="Container 128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:39:38.643410 systemd[1]: Started cri-containerd-217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916.scope - libcontainer container 217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916. 
Sep 10 23:39:38.650338 containerd[1539]: time="2025-09-10T23:39:38.650292695Z" level=info msg="Container 219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:39:38.652125 containerd[1539]: time="2025-09-10T23:39:38.652081615Z" level=info msg="CreateContainer within sandbox \"f68ee5ef0ca0c1f3f2bff6beace5f56aa5cb982ca28a8e9b9ab883e37cf00432\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9\"" Sep 10 23:39:38.652562 containerd[1539]: time="2025-09-10T23:39:38.652533135Z" level=info msg="StartContainer for \"128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9\"" Sep 10 23:39:38.653605 containerd[1539]: time="2025-09-10T23:39:38.653573895Z" level=info msg="connecting to shim 128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9" address="unix:///run/containerd/s/3c5f7dcd48459d33023d7c3958fb6e2f90d855be83698857b7d0c22e9097f127" protocol=ttrpc version=3 Sep 10 23:39:38.663137 containerd[1539]: time="2025-09-10T23:39:38.663094895Z" level=info msg="CreateContainer within sandbox \"859bf9d6dabccb1ddd1b5fc57febbea89f6477d81801b4eb38643388548af02f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d\"" Sep 10 23:39:38.663966 containerd[1539]: time="2025-09-10T23:39:38.663931215Z" level=info msg="StartContainer for \"219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d\"" Sep 10 23:39:38.665223 containerd[1539]: time="2025-09-10T23:39:38.665173015Z" level=info msg="connecting to shim 219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d" address="unix:///run/containerd/s/c6247381338dbb34dc1dfe50c6b35532850a9f74c16e30265e7e76d212969f87" protocol=ttrpc version=3 Sep 10 23:39:38.678462 systemd[1]: Started 
cri-containerd-128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9.scope - libcontainer container 128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9. Sep 10 23:39:38.682923 systemd[1]: Started cri-containerd-219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d.scope - libcontainer container 219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d. Sep 10 23:39:38.701212 containerd[1539]: time="2025-09-10T23:39:38.701156775Z" level=info msg="StartContainer for \"217b82b9a29431ef1d427059cd30e6d796bcaad3874f435bcda5a95f064dc916\" returns successfully" Sep 10 23:39:38.725925 kubelet[2305]: E0910 23:39:38.725885 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 23:39:38.738995 containerd[1539]: time="2025-09-10T23:39:38.738786175Z" level=info msg="StartContainer for \"128b9e8f9d3eccdbccd640f87ce19d98b4940c39fed597181c805f5b956d66d9\" returns successfully" Sep 10 23:39:38.742406 containerd[1539]: time="2025-09-10T23:39:38.742365295Z" level=info msg="StartContainer for \"219a8cdce71fa0d3ec0b04a8f9f2d26856dbf3526d9a63d448ed20d46643370d\" returns successfully" Sep 10 23:39:38.761325 kubelet[2305]: I0910 23:39:38.761291 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:39:38.762724 kubelet[2305]: E0910 23:39:38.762673 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 10 23:39:38.801266 kubelet[2305]: E0910 23:39:38.801209 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 23:39:38.813312 kubelet[2305]: E0910 23:39:38.813264 2305 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:39:38.877853 kubelet[2305]: E0910 23:39:38.877741 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:38.880596 kubelet[2305]: E0910 23:39:38.880566 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:38.883544 kubelet[2305]: E0910 23:39:38.883396 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:39.566732 kubelet[2305]: I0910 23:39:39.566240 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:39:39.886386 kubelet[2305]: E0910 23:39:39.886293 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:39.887076 kubelet[2305]: E0910 23:39:39.886983 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:40.290328 kubelet[2305]: E0910 23:39:40.290194 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to 
the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:39:40.368725 kubelet[2305]: E0910 23:39:40.368612 2305 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18641035cccfb05f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:39:37.836810335 +0000 UTC m=+0.359308361,LastTimestamp:2025-09-10 23:39:37.836810335 +0000 UTC m=+0.359308361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:39:40.385782 kubelet[2305]: I0910 23:39:40.385740 2305 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:39:40.385782 kubelet[2305]: E0910 23:39:40.385783 2305 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 23:39:40.400401 kubelet[2305]: E0910 23:39:40.400364 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:40.500841 kubelet[2305]: E0910 23:39:40.500802 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:40.601597 kubelet[2305]: E0910 23:39:40.601466 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:40.702144 kubelet[2305]: E0910 23:39:40.702099 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:40.802880 kubelet[2305]: E0910 23:39:40.802830 2305 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Sep 10 23:39:40.815398 kubelet[2305]: E0910 23:39:40.815362 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:40.890919 kubelet[2305]: E0910 23:39:40.890808 2305 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:39:40.903012 kubelet[2305]: E0910 23:39:40.902967 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:40.945751 kubelet[2305]: I0910 23:39:40.945282 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:40.951000 kubelet[2305]: E0910 23:39:40.950959 2305 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:40.951000 kubelet[2305]: I0910 23:39:40.951004 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:40.954083 kubelet[2305]: E0910 23:39:40.953854 2305 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:40.954083 kubelet[2305]: I0910 23:39:40.953882 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:40.956535 kubelet[2305]: E0910 23:39:40.956483 2305 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 
23:39:41.836015 kubelet[2305]: I0910 23:39:41.835750 2305 apiserver.go:52] "Watching apiserver" Sep 10 23:39:41.845230 kubelet[2305]: I0910 23:39:41.845190 2305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:39:42.820206 kubelet[2305]: I0910 23:39:42.820153 2305 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:42.880157 systemd[1]: Reload requested from client PID 2591 ('systemctl') (unit session-7.scope)... Sep 10 23:39:42.880490 systemd[1]: Reloading... Sep 10 23:39:42.943396 zram_generator::config[2637]: No configuration found. Sep 10 23:39:43.039466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:39:43.137815 systemd[1]: Reloading finished in 257 ms. Sep 10 23:39:43.163860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:39:43.181631 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:39:43.181859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:39:43.181913 systemd[1]: kubelet.service: Consumed 762ms CPU time, 128.8M memory peak. Sep 10 23:39:43.184413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:39:43.324496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:39:43.332594 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:39:43.379933 kubelet[2676]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 23:39:43.379933 kubelet[2676]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:39:43.379933 kubelet[2676]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:39:43.382595 kubelet[2676]: I0910 23:39:43.379978 2676 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:39:43.401516 kubelet[2676]: I0910 23:39:43.401102 2676 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:39:43.401516 kubelet[2676]: I0910 23:39:43.401135 2676 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:39:43.401669 kubelet[2676]: I0910 23:39:43.401613 2676 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:39:43.403008 kubelet[2676]: I0910 23:39:43.402972 2676 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 23:39:43.405556 kubelet[2676]: I0910 23:39:43.405403 2676 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:39:43.409379 kubelet[2676]: I0910 23:39:43.409345 2676 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:39:43.413262 kubelet[2676]: I0910 23:39:43.413221 2676 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:39:43.413465 kubelet[2676]: I0910 23:39:43.413438 2676 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:39:43.413623 kubelet[2676]: I0910 23:39:43.413467 2676 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:39:43.413698 kubelet[2676]: I0910 23:39:43.413632 2676 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:39:43.413698 
kubelet[2676]: I0910 23:39:43.413641 2676 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:39:43.413698 kubelet[2676]: I0910 23:39:43.413683 2676 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:39:43.413848 kubelet[2676]: I0910 23:39:43.413836 2676 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:39:43.413876 kubelet[2676]: I0910 23:39:43.413854 2676 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:39:43.413876 kubelet[2676]: I0910 23:39:43.413875 2676 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:39:43.413912 kubelet[2676]: I0910 23:39:43.413888 2676 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:39:43.415023 kubelet[2676]: I0910 23:39:43.415001 2676 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:39:43.416224 kubelet[2676]: I0910 23:39:43.415578 2676 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:39:43.421922 kubelet[2676]: I0910 23:39:43.419557 2676 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:39:43.421922 kubelet[2676]: I0910 23:39:43.419601 2676 server.go:1289] "Started kubelet" Sep 10 23:39:43.421922 kubelet[2676]: I0910 23:39:43.419906 2676 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:39:43.421922 kubelet[2676]: I0910 23:39:43.419911 2676 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:39:43.421922 kubelet[2676]: I0910 23:39:43.420133 2676 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:39:43.421922 kubelet[2676]: I0910 23:39:43.421079 2676 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:39:43.421922 
kubelet[2676]: I0910 23:39:43.421091 2676 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:39:43.425244 kubelet[2676]: I0910 23:39:43.423541 2676 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:39:43.425244 kubelet[2676]: E0910 23:39:43.424829 2676 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:39:43.425244 kubelet[2676]: I0910 23:39:43.424858 2676 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:39:43.425244 kubelet[2676]: I0910 23:39:43.425026 2676 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:39:43.425244 kubelet[2676]: I0910 23:39:43.425127 2676 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:39:43.430539 kubelet[2676]: I0910 23:39:43.430505 2676 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:39:43.436947 kubelet[2676]: I0910 23:39:43.434105 2676 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:39:43.436947 kubelet[2676]: I0910 23:39:43.434123 2676 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:39:43.436947 kubelet[2676]: E0910 23:39:43.434210 2676 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:39:43.450565 kubelet[2676]: I0910 23:39:43.450514 2676 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:39:43.451710 kubelet[2676]: I0910 23:39:43.451673 2676 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 10 23:39:43.451710 kubelet[2676]: I0910 23:39:43.451700 2676 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:39:43.451879 kubelet[2676]: I0910 23:39:43.451721 2676 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:39:43.451879 kubelet[2676]: I0910 23:39:43.451728 2676 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:39:43.451879 kubelet[2676]: E0910 23:39:43.451766 2676 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:39:43.475676 kubelet[2676]: I0910 23:39:43.475655 2676 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:39:43.475878 kubelet[2676]: I0910 23:39:43.475863 2676 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:39:43.475944 kubelet[2676]: I0910 23:39:43.475935 2676 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:39:43.476168 kubelet[2676]: I0910 23:39:43.476148 2676 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:39:43.476273 kubelet[2676]: I0910 23:39:43.476229 2676 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:39:43.476333 kubelet[2676]: I0910 23:39:43.476325 2676 policy_none.go:49] "None policy: Start" Sep 10 23:39:43.476405 kubelet[2676]: I0910 23:39:43.476396 2676 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:39:43.476475 kubelet[2676]: I0910 23:39:43.476467 2676 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:39:43.476643 kubelet[2676]: I0910 23:39:43.476629 2676 state_mem.go:75] "Updated machine memory state" Sep 10 23:39:43.481570 kubelet[2676]: E0910 23:39:43.481540 2676 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:39:43.481728 kubelet[2676]: I0910 
23:39:43.481712 2676 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:39:43.481760 kubelet[2676]: I0910 23:39:43.481733 2676 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:39:43.482181 kubelet[2676]: I0910 23:39:43.482101 2676 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:39:43.484914 kubelet[2676]: E0910 23:39:43.484883 2676 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:39:43.552680 kubelet[2676]: I0910 23:39:43.552641 2676 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:43.553154 kubelet[2676]: I0910 23:39:43.552821 2676 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:43.553154 kubelet[2676]: I0910 23:39:43.552732 2676 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:43.559373 kubelet[2676]: E0910 23:39:43.559341 2676 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:43.583763 kubelet[2676]: I0910 23:39:43.583737 2676 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:39:43.594802 kubelet[2676]: I0910 23:39:43.594770 2676 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 23:39:43.595006 kubelet[2676]: I0910 23:39:43.594996 2676 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:39:43.626554 kubelet[2676]: I0910 23:39:43.626513 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") 
pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:43.626554 kubelet[2676]: I0910 23:39:43.626552 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b94ff0a1054731a532001fc5d3bae98-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b94ff0a1054731a532001fc5d3bae98\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:43.626718 kubelet[2676]: I0910 23:39:43.626577 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b94ff0a1054731a532001fc5d3bae98-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b94ff0a1054731a532001fc5d3bae98\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:43.626718 kubelet[2676]: I0910 23:39:43.626599 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:43.626718 kubelet[2676]: I0910 23:39:43.626615 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b94ff0a1054731a532001fc5d3bae98-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b94ff0a1054731a532001fc5d3bae98\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:43.626718 kubelet[2676]: I0910 23:39:43.626632 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:43.626718 kubelet[2676]: I0910 23:39:43.626646 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:43.626834 kubelet[2676]: I0910 23:39:43.626661 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:43.626834 kubelet[2676]: I0910 23:39:43.626680 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:39:44.414898 kubelet[2676]: I0910 23:39:44.414854 2676 apiserver.go:52] "Watching apiserver" Sep 10 23:39:44.427260 kubelet[2676]: I0910 23:39:44.426925 2676 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:39:44.463729 kubelet[2676]: I0910 23:39:44.463689 2676 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:44.464221 kubelet[2676]: I0910 23:39:44.464188 2676 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:44.469183 kubelet[2676]: 
E0910 23:39:44.468964 2676 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:39:44.469316 kubelet[2676]: E0910 23:39:44.469289 2676 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 23:39:44.489279 kubelet[2676]: I0910 23:39:44.489172 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.489156655 podStartE2EDuration="1.489156655s" podCreationTimestamp="2025-09-10 23:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:39:44.489080335 +0000 UTC m=+1.150256081" watchObservedRunningTime="2025-09-10 23:39:44.489156655 +0000 UTC m=+1.150332401" Sep 10 23:39:44.510579 kubelet[2676]: I0910 23:39:44.510496 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.510474855 podStartE2EDuration="2.510474855s" podCreationTimestamp="2025-09-10 23:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:39:44.498833135 +0000 UTC m=+1.160008881" watchObservedRunningTime="2025-09-10 23:39:44.510474855 +0000 UTC m=+1.171650601" Sep 10 23:39:44.511200 kubelet[2676]: I0910 23:39:44.511122 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.511089455 podStartE2EDuration="1.511089455s" podCreationTimestamp="2025-09-10 23:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:39:44.510870575 +0000 UTC m=+1.172046321" 
watchObservedRunningTime="2025-09-10 23:39:44.511089455 +0000 UTC m=+1.172265201" Sep 10 23:39:49.339898 kubelet[2676]: I0910 23:39:49.339866 2676 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:39:49.340468 kubelet[2676]: I0910 23:39:49.340390 2676 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:39:49.340521 containerd[1539]: time="2025-09-10T23:39:49.340196135Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:39:49.987998 systemd[1]: Created slice kubepods-besteffort-podbb81edb7_7b0d_4fdf_8835_6dbcf9ddc542.slice - libcontainer container kubepods-besteffort-podbb81edb7_7b0d_4fdf_8835_6dbcf9ddc542.slice. Sep 10 23:39:50.067588 kubelet[2676]: I0910 23:39:50.067551 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542-kube-proxy\") pod \"kube-proxy-wb6tm\" (UID: \"bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542\") " pod="kube-system/kube-proxy-wb6tm" Sep 10 23:39:50.067759 kubelet[2676]: I0910 23:39:50.067746 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542-xtables-lock\") pod \"kube-proxy-wb6tm\" (UID: \"bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542\") " pod="kube-system/kube-proxy-wb6tm" Sep 10 23:39:50.067824 kubelet[2676]: I0910 23:39:50.067814 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542-lib-modules\") pod \"kube-proxy-wb6tm\" (UID: \"bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542\") " pod="kube-system/kube-proxy-wb6tm" Sep 10 23:39:50.067894 kubelet[2676]: I0910 23:39:50.067881 2676 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwtv\" (UniqueName: \"kubernetes.io/projected/bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542-kube-api-access-whwtv\") pod \"kube-proxy-wb6tm\" (UID: \"bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542\") " pod="kube-system/kube-proxy-wb6tm" Sep 10 23:39:50.184183 kubelet[2676]: E0910 23:39:50.184117 2676 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 10 23:39:50.184656 kubelet[2676]: E0910 23:39:50.184355 2676 projected.go:194] Error preparing data for projected volume kube-api-access-whwtv for pod kube-system/kube-proxy-wb6tm: configmap "kube-root-ca.crt" not found Sep 10 23:39:50.184656 kubelet[2676]: E0910 23:39:50.184439 2676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542-kube-api-access-whwtv podName:bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542 nodeName:}" failed. No retries permitted until 2025-09-10 23:39:50.684414757 +0000 UTC m=+7.345590503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-whwtv" (UniqueName: "kubernetes.io/projected/bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542-kube-api-access-whwtv") pod "kube-proxy-wb6tm" (UID: "bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542") : configmap "kube-root-ca.crt" not found Sep 10 23:39:50.605602 systemd[1]: Created slice kubepods-besteffort-pod086cfbac_6c0a_493c_ac34_6a1b2ea1ad83.slice - libcontainer container kubepods-besteffort-pod086cfbac_6c0a_493c_ac34_6a1b2ea1ad83.slice. 
Sep 10 23:39:50.671475 kubelet[2676]: I0910 23:39:50.671414 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/086cfbac-6c0a-493c-ac34-6a1b2ea1ad83-var-lib-calico\") pod \"tigera-operator-755d956888-gks4v\" (UID: \"086cfbac-6c0a-493c-ac34-6a1b2ea1ad83\") " pod="tigera-operator/tigera-operator-755d956888-gks4v" Sep 10 23:39:50.671862 kubelet[2676]: I0910 23:39:50.671495 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnfl\" (UniqueName: \"kubernetes.io/projected/086cfbac-6c0a-493c-ac34-6a1b2ea1ad83-kube-api-access-7hnfl\") pod \"tigera-operator-755d956888-gks4v\" (UID: \"086cfbac-6c0a-493c-ac34-6a1b2ea1ad83\") " pod="tigera-operator/tigera-operator-755d956888-gks4v" Sep 10 23:39:50.898680 containerd[1539]: time="2025-09-10T23:39:50.898573898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wb6tm,Uid:bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542,Namespace:kube-system,Attempt:0,}" Sep 10 23:39:50.910945 containerd[1539]: time="2025-09-10T23:39:50.910908917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gks4v,Uid:086cfbac-6c0a-493c-ac34-6a1b2ea1ad83,Namespace:tigera-operator,Attempt:0,}" Sep 10 23:39:50.914680 containerd[1539]: time="2025-09-10T23:39:50.914627787Z" level=info msg="connecting to shim 869be2548977fbad21a4f6ab744db0e48128cacdbad684ede07053f3a9511365" address="unix:///run/containerd/s/61676776e9b885d8918e9ebbe348655176111881fd2c12dc5a118adc58a0e6ff" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:39:50.932716 containerd[1539]: time="2025-09-10T23:39:50.932651412Z" level=info msg="connecting to shim 97cd26fbb6cb19d9e3a18e8e371d3c13a67c926af4c57427bd82d2dc88002b70" address="unix:///run/containerd/s/59117ec4fd8f5bf19c43ff340a35bef6343735c2e216b1d13fd563fb41bca7bb" namespace=k8s.io protocol=ttrpc version=3 Sep 10 
23:39:50.955444 systemd[1]: Started cri-containerd-869be2548977fbad21a4f6ab744db0e48128cacdbad684ede07053f3a9511365.scope - libcontainer container 869be2548977fbad21a4f6ab744db0e48128cacdbad684ede07053f3a9511365. Sep 10 23:39:50.958367 systemd[1]: Started cri-containerd-97cd26fbb6cb19d9e3a18e8e371d3c13a67c926af4c57427bd82d2dc88002b70.scope - libcontainer container 97cd26fbb6cb19d9e3a18e8e371d3c13a67c926af4c57427bd82d2dc88002b70. Sep 10 23:39:50.994257 containerd[1539]: time="2025-09-10T23:39:50.994201707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wb6tm,Uid:bb81edb7-7b0d-4fdf-8835-6dbcf9ddc542,Namespace:kube-system,Attempt:0,} returns sandbox id \"869be2548977fbad21a4f6ab744db0e48128cacdbad684ede07053f3a9511365\"" Sep 10 23:39:50.996958 containerd[1539]: time="2025-09-10T23:39:50.996925929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gks4v,Uid:086cfbac-6c0a-493c-ac34-6a1b2ea1ad83,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"97cd26fbb6cb19d9e3a18e8e371d3c13a67c926af4c57427bd82d2dc88002b70\"" Sep 10 23:39:51.000498 containerd[1539]: time="2025-09-10T23:39:51.000463317Z" level=info msg="CreateContainer within sandbox \"869be2548977fbad21a4f6ab744db0e48128cacdbad684ede07053f3a9511365\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:39:51.000925 containerd[1539]: time="2025-09-10T23:39:51.000674679Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 23:39:51.010544 containerd[1539]: time="2025-09-10T23:39:51.010511553Z" level=info msg="Container 1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:39:51.022544 containerd[1539]: time="2025-09-10T23:39:51.022507484Z" level=info msg="CreateContainer within sandbox \"869be2548977fbad21a4f6ab744db0e48128cacdbad684ede07053f3a9511365\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5\"" Sep 10 23:39:51.023279 containerd[1539]: time="2025-09-10T23:39:51.023249769Z" level=info msg="StartContainer for \"1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5\"" Sep 10 23:39:51.025516 containerd[1539]: time="2025-09-10T23:39:51.025490386Z" level=info msg="connecting to shim 1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5" address="unix:///run/containerd/s/61676776e9b885d8918e9ebbe348655176111881fd2c12dc5a118adc58a0e6ff" protocol=ttrpc version=3 Sep 10 23:39:51.042417 systemd[1]: Started cri-containerd-1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5.scope - libcontainer container 1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5. Sep 10 23:39:51.086543 containerd[1539]: time="2025-09-10T23:39:51.086504486Z" level=info msg="StartContainer for \"1f89a50019819609bedccd642cf4308e6818d0c32ba07fa1472a6b701e4385a5\" returns successfully" Sep 10 23:39:51.505054 kubelet[2676]: I0910 23:39:51.504986 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wb6tm" podStartSLOduration=2.50497072 podStartE2EDuration="2.50497072s" podCreationTimestamp="2025-09-10 23:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:39:51.49166422 +0000 UTC m=+8.152839966" watchObservedRunningTime="2025-09-10 23:39:51.50497072 +0000 UTC m=+8.166146466" Sep 10 23:39:52.279273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074952825.mount: Deactivated successfully. 
Sep 10 23:39:54.864508 containerd[1539]: time="2025-09-10T23:39:54.864463352Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 23:39:54.869092 containerd[1539]: time="2025-09-10T23:39:54.868746618Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:39:54.869774 containerd[1539]: time="2025-09-10T23:39:54.869746144Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.869042345s" Sep 10 23:39:54.869870 containerd[1539]: time="2025-09-10T23:39:54.869854745Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 23:39:54.869979 containerd[1539]: time="2025-09-10T23:39:54.869886145Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:39:54.870644 containerd[1539]: time="2025-09-10T23:39:54.870615470Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:39:54.877251 containerd[1539]: time="2025-09-10T23:39:54.877207871Z" level=info msg="CreateContainer within sandbox \"97cd26fbb6cb19d9e3a18e8e371d3c13a67c926af4c57427bd82d2dc88002b70\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 23:39:54.885441 containerd[1539]: time="2025-09-10T23:39:54.885393082Z" level=info msg="Container 
44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:39:54.892059 containerd[1539]: time="2025-09-10T23:39:54.892003923Z" level=info msg="CreateContainer within sandbox \"97cd26fbb6cb19d9e3a18e8e371d3c13a67c926af4c57427bd82d2dc88002b70\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9\"" Sep 10 23:39:54.893771 containerd[1539]: time="2025-09-10T23:39:54.892472646Z" level=info msg="StartContainer for \"44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9\"" Sep 10 23:39:54.894692 containerd[1539]: time="2025-09-10T23:39:54.894663299Z" level=info msg="connecting to shim 44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9" address="unix:///run/containerd/s/59117ec4fd8f5bf19c43ff340a35bef6343735c2e216b1d13fd563fb41bca7bb" protocol=ttrpc version=3 Sep 10 23:39:54.940458 systemd[1]: Started cri-containerd-44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9.scope - libcontainer container 44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9. Sep 10 23:39:54.968158 containerd[1539]: time="2025-09-10T23:39:54.968094515Z" level=info msg="StartContainer for \"44d3200e463e13922d6473a44c6b9574f6d9856761a252e0a5b4aa2d5be79db9\" returns successfully" Sep 10 23:39:57.935062 update_engine[1525]: I20250910 23:39:57.934981 1525 update_attempter.cc:509] Updating boot flags... Sep 10 23:40:00.427007 sudo[1736]: pam_unix(sudo:session): session closed for user root Sep 10 23:40:00.430474 sshd[1735]: Connection closed by 10.0.0.1 port 33314 Sep 10 23:40:00.431002 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 10 23:40:00.435339 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:33314.service: Deactivated successfully. Sep 10 23:40:00.439892 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 10 23:40:00.440372 systemd[1]: session-7.scope: Consumed 6.988s CPU time, 225.8M memory peak. Sep 10 23:40:00.441822 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:40:00.443649 systemd-logind[1520]: Removed session 7. Sep 10 23:40:06.040330 kubelet[2676]: I0910 23:40:06.040262 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-gks4v" podStartSLOduration=12.167662697 podStartE2EDuration="16.040244785s" podCreationTimestamp="2025-09-10 23:39:50 +0000 UTC" firstStartedPulling="2025-09-10 23:39:51.000184635 +0000 UTC m=+7.661360381" lastFinishedPulling="2025-09-10 23:39:54.872766723 +0000 UTC m=+11.533942469" observedRunningTime="2025-09-10 23:39:55.497107768 +0000 UTC m=+12.158283514" watchObservedRunningTime="2025-09-10 23:40:06.040244785 +0000 UTC m=+22.701420531" Sep 10 23:40:06.063470 systemd[1]: Created slice kubepods-besteffort-pod7d289990_5e23_44ae_8676_6b0489ede4e1.slice - libcontainer container kubepods-besteffort-pod7d289990_5e23_44ae_8676_6b0489ede4e1.slice. 
Sep 10 23:40:06.074788 kubelet[2676]: I0910 23:40:06.074729 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvz4j\" (UniqueName: \"kubernetes.io/projected/7d289990-5e23-44ae-8676-6b0489ede4e1-kube-api-access-qvz4j\") pod \"calico-typha-5747b6db78-tn2cs\" (UID: \"7d289990-5e23-44ae-8676-6b0489ede4e1\") " pod="calico-system/calico-typha-5747b6db78-tn2cs" Sep 10 23:40:06.074788 kubelet[2676]: I0910 23:40:06.074790 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d289990-5e23-44ae-8676-6b0489ede4e1-typha-certs\") pod \"calico-typha-5747b6db78-tn2cs\" (UID: \"7d289990-5e23-44ae-8676-6b0489ede4e1\") " pod="calico-system/calico-typha-5747b6db78-tn2cs" Sep 10 23:40:06.075042 kubelet[2676]: I0910 23:40:06.074820 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d289990-5e23-44ae-8676-6b0489ede4e1-tigera-ca-bundle\") pod \"calico-typha-5747b6db78-tn2cs\" (UID: \"7d289990-5e23-44ae-8676-6b0489ede4e1\") " pod="calico-system/calico-typha-5747b6db78-tn2cs" Sep 10 23:40:06.370553 containerd[1539]: time="2025-09-10T23:40:06.370432850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5747b6db78-tn2cs,Uid:7d289990-5e23-44ae-8676-6b0489ede4e1,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:06.531386 systemd[1]: Created slice kubepods-besteffort-pod9cdd8288_5d36_4b73_acc7_ae22c2c5fd93.slice - libcontainer container kubepods-besteffort-pod9cdd8288_5d36_4b73_acc7_ae22c2c5fd93.slice. 
Sep 10 23:40:06.533882 containerd[1539]: time="2025-09-10T23:40:06.533837198Z" level=info msg="connecting to shim 365b5823efd6917417f4c2bb1dfaf1ebfb0fcf56f77aaf833d826414da982d23" address="unix:///run/containerd/s/b5f6929ef50cc1893009f084e2a187b03f085c7ba23845c7685293199eaec8b2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:06.578105 kubelet[2676]: I0910 23:40:06.578061 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-var-run-calico\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578282 kubelet[2676]: I0910 23:40:06.578114 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-cni-log-dir\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578282 kubelet[2676]: I0910 23:40:06.578136 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-cni-net-dir\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578282 kubelet[2676]: I0910 23:40:06.578158 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-lib-modules\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578282 kubelet[2676]: I0910 23:40:06.578174 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-xtables-lock\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578282 kubelet[2676]: I0910 23:40:06.578193 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-flexvol-driver-host\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578410 kubelet[2676]: I0910 23:40:06.578215 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptxr\" (UniqueName: \"kubernetes.io/projected/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-kube-api-access-tptxr\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578410 kubelet[2676]: I0910 23:40:06.578245 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-tigera-ca-bundle\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578410 kubelet[2676]: I0910 23:40:06.578265 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-cni-bin-dir\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578410 kubelet[2676]: I0910 23:40:06.578284 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-node-certs\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578410 kubelet[2676]: I0910 23:40:06.578299 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-policysync\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.578512 kubelet[2676]: I0910 23:40:06.578314 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9cdd8288-5d36-4b73-acc7-ae22c2c5fd93-var-lib-calico\") pod \"calico-node-5whl7\" (UID: \"9cdd8288-5d36-4b73-acc7-ae22c2c5fd93\") " pod="calico-system/calico-node-5whl7" Sep 10 23:40:06.586413 systemd[1]: Started cri-containerd-365b5823efd6917417f4c2bb1dfaf1ebfb0fcf56f77aaf833d826414da982d23.scope - libcontainer container 365b5823efd6917417f4c2bb1dfaf1ebfb0fcf56f77aaf833d826414da982d23. 
Sep 10 23:40:06.632056 kubelet[2676]: E0910 23:40:06.631948 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwnmd" podUID="c9d769ee-2e06-45f3-90be-dacda960296b" Sep 10 23:40:06.643192 containerd[1539]: time="2025-09-10T23:40:06.643149351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5747b6db78-tn2cs,Uid:7d289990-5e23-44ae-8676-6b0489ede4e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"365b5823efd6917417f4c2bb1dfaf1ebfb0fcf56f77aaf833d826414da982d23\"" Sep 10 23:40:06.645791 containerd[1539]: time="2025-09-10T23:40:06.645768318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 23:40:06.679231 kubelet[2676]: I0910 23:40:06.679177 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c9d769ee-2e06-45f3-90be-dacda960296b-varrun\") pod \"csi-node-driver-gwnmd\" (UID: \"c9d769ee-2e06-45f3-90be-dacda960296b\") " pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:06.679382 kubelet[2676]: I0910 23:40:06.679268 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9d769ee-2e06-45f3-90be-dacda960296b-kubelet-dir\") pod \"csi-node-driver-gwnmd\" (UID: \"c9d769ee-2e06-45f3-90be-dacda960296b\") " pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:06.679382 kubelet[2676]: I0910 23:40:06.679287 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9d769ee-2e06-45f3-90be-dacda960296b-registration-dir\") pod \"csi-node-driver-gwnmd\" (UID: \"c9d769ee-2e06-45f3-90be-dacda960296b\") " 
pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:06.679382 kubelet[2676]: I0910 23:40:06.679323 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9d769ee-2e06-45f3-90be-dacda960296b-socket-dir\") pod \"csi-node-driver-gwnmd\" (UID: \"c9d769ee-2e06-45f3-90be-dacda960296b\") " pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:06.679382 kubelet[2676]: I0910 23:40:06.679339 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkcjk\" (UniqueName: \"kubernetes.io/projected/c9d769ee-2e06-45f3-90be-dacda960296b-kube-api-access-vkcjk\") pod \"csi-node-driver-gwnmd\" (UID: \"c9d769ee-2e06-45f3-90be-dacda960296b\") " pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:06.694921 kubelet[2676]: E0910 23:40:06.694893 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.694921 kubelet[2676]: W0910 23:40:06.694914 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.702558 kubelet[2676]: E0910 23:40:06.701973 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.780636 kubelet[2676]: E0910 23:40:06.780607 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.780636 kubelet[2676]: W0910 23:40:06.780632 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.781001 kubelet[2676]: E0910 23:40:06.780653 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.781001 kubelet[2676]: E0910 23:40:06.780915 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.781001 kubelet[2676]: W0910 23:40:06.780927 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.781001 kubelet[2676]: E0910 23:40:06.780938 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.781682 kubelet[2676]: E0910 23:40:06.781638 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.781987 kubelet[2676]: W0910 23:40:06.781960 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.782183 kubelet[2676]: E0910 23:40:06.782116 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.783060 kubelet[2676]: E0910 23:40:06.783016 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.783060 kubelet[2676]: W0910 23:40:06.783043 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.783546 kubelet[2676]: E0910 23:40:06.783066 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.785368 kubelet[2676]: E0910 23:40:06.785351 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.785368 kubelet[2676]: W0910 23:40:06.785366 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.785501 kubelet[2676]: E0910 23:40:06.785378 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.785586 kubelet[2676]: E0910 23:40:06.785575 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.785586 kubelet[2676]: W0910 23:40:06.785585 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.785636 kubelet[2676]: E0910 23:40:06.785593 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.785777 kubelet[2676]: E0910 23:40:06.785767 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.785815 kubelet[2676]: W0910 23:40:06.785779 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.785815 kubelet[2676]: E0910 23:40:06.785787 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.785932 kubelet[2676]: E0910 23:40:06.785922 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.785932 kubelet[2676]: W0910 23:40:06.785931 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.786086 kubelet[2676]: E0910 23:40:06.785939 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.786086 kubelet[2676]: E0910 23:40:06.786084 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.786150 kubelet[2676]: W0910 23:40:06.786091 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.786150 kubelet[2676]: E0910 23:40:06.786109 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.786273 kubelet[2676]: E0910 23:40:06.786260 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.786273 kubelet[2676]: W0910 23:40:06.786272 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.786347 kubelet[2676]: E0910 23:40:06.786280 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.786418 kubelet[2676]: E0910 23:40:06.786408 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.786418 kubelet[2676]: W0910 23:40:06.786418 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.786471 kubelet[2676]: E0910 23:40:06.786429 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.786740 kubelet[2676]: E0910 23:40:06.786668 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.786740 kubelet[2676]: W0910 23:40:06.786685 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.786740 kubelet[2676]: E0910 23:40:06.786698 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.787067 kubelet[2676]: E0910 23:40:06.786981 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.787067 kubelet[2676]: W0910 23:40:06.786994 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.787067 kubelet[2676]: E0910 23:40:06.787003 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.787359 kubelet[2676]: E0910 23:40:06.787345 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.787418 kubelet[2676]: W0910 23:40:06.787407 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.787478 kubelet[2676]: E0910 23:40:06.787467 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.787743 kubelet[2676]: E0910 23:40:06.787673 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.787743 kubelet[2676]: W0910 23:40:06.787685 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.787743 kubelet[2676]: E0910 23:40:06.787694 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.788569 kubelet[2676]: E0910 23:40:06.788546 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.788569 kubelet[2676]: W0910 23:40:06.788561 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.788569 kubelet[2676]: E0910 23:40:06.788573 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.788838 kubelet[2676]: E0910 23:40:06.788815 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.788898 kubelet[2676]: W0910 23:40:06.788880 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.788935 kubelet[2676]: E0910 23:40:06.788897 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.789139 kubelet[2676]: E0910 23:40:06.789115 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.789139 kubelet[2676]: W0910 23:40:06.789128 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.789139 kubelet[2676]: E0910 23:40:06.789137 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.789308 kubelet[2676]: E0910 23:40:06.789291 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.789308 kubelet[2676]: W0910 23:40:06.789302 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.789367 kubelet[2676]: E0910 23:40:06.789310 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789440 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.790771 kubelet[2676]: W0910 23:40:06.789450 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789458 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789582 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.790771 kubelet[2676]: W0910 23:40:06.789589 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789596 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789756 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.790771 kubelet[2676]: W0910 23:40:06.789764 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789771 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.790771 kubelet[2676]: E0910 23:40:06.789907 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.791006 kubelet[2676]: W0910 23:40:06.789914 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.791006 kubelet[2676]: E0910 23:40:06.789922 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.791006 kubelet[2676]: E0910 23:40:06.790070 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.791006 kubelet[2676]: W0910 23:40:06.790077 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.791006 kubelet[2676]: E0910 23:40:06.790084 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.791006 kubelet[2676]: E0910 23:40:06.790257 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.791006 kubelet[2676]: W0910 23:40:06.790267 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.791006 kubelet[2676]: E0910 23:40:06.790276 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:06.802254 kubelet[2676]: E0910 23:40:06.802208 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:06.802254 kubelet[2676]: W0910 23:40:06.802251 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:06.803352 kubelet[2676]: E0910 23:40:06.802272 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:06.838119 containerd[1539]: time="2025-09-10T23:40:06.838031389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5whl7,Uid:9cdd8288-5d36-4b73-acc7-ae22c2c5fd93,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:06.854738 containerd[1539]: time="2025-09-10T23:40:06.854699757Z" level=info msg="connecting to shim 98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2" address="unix:///run/containerd/s/938586dcc66a9067b7df220d12aee2b2e836e972b07cbc8c4f431e024745ace0" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:06.881454 systemd[1]: Started cri-containerd-98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2.scope - libcontainer container 98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2. Sep 10 23:40:06.911178 containerd[1539]: time="2025-09-10T23:40:06.911050358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5whl7,Uid:9cdd8288-5d36-4b73-acc7-ae22c2c5fd93,Namespace:calico-system,Attempt:0,} returns sandbox id \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\"" Sep 10 23:40:07.705766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321392939.mount: Deactivated successfully. 
Sep 10 23:40:08.357031 containerd[1539]: time="2025-09-10T23:40:08.356985955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:08.357570 containerd[1539]: time="2025-09-10T23:40:08.357541796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 23:40:08.358399 containerd[1539]: time="2025-09-10T23:40:08.358373398Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:08.360648 containerd[1539]: time="2025-09-10T23:40:08.360617924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:08.361201 containerd[1539]: time="2025-09-10T23:40:08.361167365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.715248086s" Sep 10 23:40:08.361201 containerd[1539]: time="2025-09-10T23:40:08.361198285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 23:40:08.362194 containerd[1539]: time="2025-09-10T23:40:08.362166608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 23:40:08.372467 containerd[1539]: time="2025-09-10T23:40:08.372423794Z" level=info msg="CreateContainer within sandbox \"365b5823efd6917417f4c2bb1dfaf1ebfb0fcf56f77aaf833d826414da982d23\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 23:40:08.378727 containerd[1539]: time="2025-09-10T23:40:08.378621729Z" level=info msg="Container c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:08.389287 containerd[1539]: time="2025-09-10T23:40:08.389227396Z" level=info msg="CreateContainer within sandbox \"365b5823efd6917417f4c2bb1dfaf1ebfb0fcf56f77aaf833d826414da982d23\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3\"" Sep 10 23:40:08.391280 containerd[1539]: time="2025-09-10T23:40:08.390970400Z" level=info msg="StartContainer for \"c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3\"" Sep 10 23:40:08.392920 containerd[1539]: time="2025-09-10T23:40:08.392883325Z" level=info msg="connecting to shim c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3" address="unix:///run/containerd/s/b5f6929ef50cc1893009f084e2a187b03f085c7ba23845c7685293199eaec8b2" protocol=ttrpc version=3 Sep 10 23:40:08.427424 systemd[1]: Started cri-containerd-c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3.scope - libcontainer container c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3. 
Sep 10 23:40:08.452717 kubelet[2676]: E0910 23:40:08.452637 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwnmd" podUID="c9d769ee-2e06-45f3-90be-dacda960296b" Sep 10 23:40:08.469039 containerd[1539]: time="2025-09-10T23:40:08.468997277Z" level=info msg="StartContainer for \"c6cb9b1d4d042a7ccdd8bdc4435247b98de56a6d52b9e02a920771da33bd72d3\" returns successfully" Sep 10 23:40:08.548569 kubelet[2676]: I0910 23:40:08.548506 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5747b6db78-tn2cs" podStartSLOduration=0.832067788 podStartE2EDuration="2.548491397s" podCreationTimestamp="2025-09-10 23:40:06 +0000 UTC" firstStartedPulling="2025-09-10 23:40:06.645552518 +0000 UTC m=+23.306728264" lastFinishedPulling="2025-09-10 23:40:08.361976127 +0000 UTC m=+25.023151873" observedRunningTime="2025-09-10 23:40:08.548172196 +0000 UTC m=+25.209347942" watchObservedRunningTime="2025-09-10 23:40:08.548491397 +0000 UTC m=+25.209667143" Sep 10 23:40:08.582814 kubelet[2676]: E0910 23:40:08.582782 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:08.583201 kubelet[2676]: W0910 23:40:08.582978 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:08.583201 kubelet[2676]: E0910 23:40:08.583008 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 10 23:40:09.537698 kubelet[2676]: I0910 23:40:09.537669 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:40:09.599724 kubelet[2676]: E0910 23:40:09.599680 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.599724 kubelet[2676]: W0910 23:40:09.599708 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.599724 kubelet[2676]: E0910 23:40:09.599731 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 10 23:40:09.613892 kubelet[2676]: E0910 23:40:09.613854 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.613892 kubelet[2676]: W0910 23:40:09.613868 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.613892 kubelet[2676]: E0910 23:40:09.613880 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:09.614314 kubelet[2676]: E0910 23:40:09.614275 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.614314 kubelet[2676]: W0910 23:40:09.614290 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.614314 kubelet[2676]: E0910 23:40:09.614302 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:09.614849 kubelet[2676]: E0910 23:40:09.614835 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.614951 kubelet[2676]: W0910 23:40:09.614937 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.615068 kubelet[2676]: E0910 23:40:09.614981 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:09.615351 kubelet[2676]: E0910 23:40:09.615337 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.615432 kubelet[2676]: W0910 23:40:09.615418 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.615489 kubelet[2676]: E0910 23:40:09.615479 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:09.615804 kubelet[2676]: E0910 23:40:09.615791 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.615885 kubelet[2676]: W0910 23:40:09.615873 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.615968 kubelet[2676]: E0910 23:40:09.615956 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:09.616391 kubelet[2676]: E0910 23:40:09.616378 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.616467 kubelet[2676]: W0910 23:40:09.616454 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.616518 kubelet[2676]: E0910 23:40:09.616507 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:09.616769 kubelet[2676]: E0910 23:40:09.616721 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.616769 kubelet[2676]: W0910 23:40:09.616733 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.616769 kubelet[2676]: E0910 23:40:09.616743 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:09.617095 kubelet[2676]: E0910 23:40:09.617052 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.617095 kubelet[2676]: W0910 23:40:09.617064 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.617095 kubelet[2676]: E0910 23:40:09.617075 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:09.617460 kubelet[2676]: E0910 23:40:09.617446 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.617613 kubelet[2676]: W0910 23:40:09.617497 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.617613 kubelet[2676]: E0910 23:40:09.617510 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:09.617915 kubelet[2676]: E0910 23:40:09.617901 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.617994 kubelet[2676]: W0910 23:40:09.617981 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.618065 kubelet[2676]: E0910 23:40:09.618041 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:40:09.618703 kubelet[2676]: E0910 23:40:09.618632 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:40:09.618703 kubelet[2676]: W0910 23:40:09.618662 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:40:09.618703 kubelet[2676]: E0910 23:40:09.618674 2676 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:40:09.706812 containerd[1539]: time="2025-09-10T23:40:09.706755520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:09.708152 containerd[1539]: time="2025-09-10T23:40:09.708120403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 23:40:09.708778 containerd[1539]: time="2025-09-10T23:40:09.708744965Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:09.710835 containerd[1539]: time="2025-09-10T23:40:09.710795849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:09.713098 containerd[1539]: time="2025-09-10T23:40:09.713046295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.350839207s" Sep 10 23:40:09.713098 containerd[1539]: time="2025-09-10T23:40:09.713099095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 23:40:09.716970 containerd[1539]: time="2025-09-10T23:40:09.716934224Z" level=info msg="CreateContainer within sandbox \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 23:40:09.727754 containerd[1539]: time="2025-09-10T23:40:09.725405644Z" level=info msg="Container db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:09.736727 containerd[1539]: time="2025-09-10T23:40:09.736606390Z" level=info msg="CreateContainer within sandbox \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\"" Sep 10 23:40:09.737671 containerd[1539]: time="2025-09-10T23:40:09.737645833Z" level=info msg="StartContainer for \"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\"" Sep 10 23:40:09.739469 containerd[1539]: time="2025-09-10T23:40:09.739440837Z" level=info msg="connecting to shim db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963" address="unix:///run/containerd/s/938586dcc66a9067b7df220d12aee2b2e836e972b07cbc8c4f431e024745ace0" protocol=ttrpc version=3 Sep 10 23:40:09.764446 systemd[1]: Started cri-containerd-db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963.scope - libcontainer container 
db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963. Sep 10 23:40:09.800591 containerd[1539]: time="2025-09-10T23:40:09.800479981Z" level=info msg="StartContainer for \"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\" returns successfully" Sep 10 23:40:09.814380 systemd[1]: cri-containerd-db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963.scope: Deactivated successfully. Sep 10 23:40:09.853387 containerd[1539]: time="2025-09-10T23:40:09.853339106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\" id:\"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\" pid:3366 exited_at:{seconds:1757547609 nanos:830979813}" Sep 10 23:40:09.853387 containerd[1539]: time="2025-09-10T23:40:09.853346666Z" level=info msg="received exit event container_id:\"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\" id:\"db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963\" pid:3366 exited_at:{seconds:1757547609 nanos:830979813}" Sep 10 23:40:09.888599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db1316eec6d10b23928514cfcc2c0ea6e354c4460cfb4c64432af5a955fd9963-rootfs.mount: Deactivated successfully. 
Sep 10 23:40:10.452405 kubelet[2676]: E0910 23:40:10.452357 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwnmd" podUID="c9d769ee-2e06-45f3-90be-dacda960296b" Sep 10 23:40:10.546635 containerd[1539]: time="2025-09-10T23:40:10.546593421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 23:40:12.453323 kubelet[2676]: E0910 23:40:12.452954 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwnmd" podUID="c9d769ee-2e06-45f3-90be-dacda960296b" Sep 10 23:40:12.591908 containerd[1539]: time="2025-09-10T23:40:12.591870647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:12.592706 containerd[1539]: time="2025-09-10T23:40:12.592334608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 23:40:12.593606 containerd[1539]: time="2025-09-10T23:40:12.593571730Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:12.595296 containerd[1539]: time="2025-09-10T23:40:12.595252174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:12.595762 containerd[1539]: time="2025-09-10T23:40:12.595731014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.049100433s" Sep 10 23:40:12.595762 containerd[1539]: time="2025-09-10T23:40:12.595761215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 23:40:12.600628 containerd[1539]: time="2025-09-10T23:40:12.600598944Z" level=info msg="CreateContainer within sandbox \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 23:40:12.618862 containerd[1539]: time="2025-09-10T23:40:12.618171618Z" level=info msg="Container af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:12.620916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720074676.mount: Deactivated successfully. 
Sep 10 23:40:12.627033 containerd[1539]: time="2025-09-10T23:40:12.626984915Z" level=info msg="CreateContainer within sandbox \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\"" Sep 10 23:40:12.628574 containerd[1539]: time="2025-09-10T23:40:12.627855157Z" level=info msg="StartContainer for \"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\"" Sep 10 23:40:12.630884 containerd[1539]: time="2025-09-10T23:40:12.630832603Z" level=info msg="connecting to shim af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e" address="unix:///run/containerd/s/938586dcc66a9067b7df220d12aee2b2e836e972b07cbc8c4f431e024745ace0" protocol=ttrpc version=3 Sep 10 23:40:12.659427 systemd[1]: Started cri-containerd-af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e.scope - libcontainer container af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e. Sep 10 23:40:12.755111 containerd[1539]: time="2025-09-10T23:40:12.754991964Z" level=info msg="StartContainer for \"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\" returns successfully" Sep 10 23:40:13.257322 systemd[1]: cri-containerd-af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e.scope: Deactivated successfully. Sep 10 23:40:13.257595 systemd[1]: cri-containerd-af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e.scope: Consumed 457ms CPU time, 177.3M memory peak, 564K read from disk, 165.8M written to disk. 
Sep 10 23:40:13.259368 containerd[1539]: time="2025-09-10T23:40:13.259273353Z" level=info msg="received exit event container_id:\"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\" id:\"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\" pid:3426 exited_at:{seconds:1757547613 nanos:258978152}" Sep 10 23:40:13.259680 containerd[1539]: time="2025-09-10T23:40:13.259651153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\" id:\"af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e\" pid:3426 exited_at:{seconds:1757547613 nanos:258978152}" Sep 10 23:40:13.279910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af64c08c5bb7a3e2a86058e3889855bb05c659acd58fd183be574e2d571c932e-rootfs.mount: Deactivated successfully. Sep 10 23:40:13.340263 kubelet[2676]: I0910 23:40:13.340208 2676 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:40:13.388705 systemd[1]: Created slice kubepods-burstable-pod29ede58b_d519_414e_9f19_adc281fcdff3.slice - libcontainer container kubepods-burstable-pod29ede58b_d519_414e_9f19_adc281fcdff3.slice. Sep 10 23:40:13.409758 systemd[1]: Created slice kubepods-besteffort-pod838ea765_8eb0_4ca7_a81a_ef19e1d4710d.slice - libcontainer container kubepods-besteffort-pod838ea765_8eb0_4ca7_a81a_ef19e1d4710d.slice. Sep 10 23:40:13.428019 systemd[1]: Created slice kubepods-besteffort-podff3cf021_c1d9_42c3_9a3e_0e7f431c4377.slice - libcontainer container kubepods-besteffort-podff3cf021_c1d9_42c3_9a3e_0e7f431c4377.slice. 
Sep 10 23:40:13.439550 kubelet[2676]: I0910 23:40:13.439012 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed49b779-ae26-4f21-8c71-bfb295a2839f-calico-apiserver-certs\") pod \"calico-apiserver-6ffb66c4b7-vqb6s\" (UID: \"ed49b779-ae26-4f21-8c71-bfb295a2839f\") " pod="calico-apiserver/calico-apiserver-6ffb66c4b7-vqb6s" Sep 10 23:40:13.439550 kubelet[2676]: I0910 23:40:13.439078 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmxf\" (UniqueName: \"kubernetes.io/projected/ff3cf021-c1d9-42c3-9a3e-0e7f431c4377-kube-api-access-4nmxf\") pod \"calico-kube-controllers-55974d75bc-nmr7w\" (UID: \"ff3cf021-c1d9-42c3-9a3e-0e7f431c4377\") " pod="calico-system/calico-kube-controllers-55974d75bc-nmr7w" Sep 10 23:40:13.439550 kubelet[2676]: I0910 23:40:13.439114 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6857a3bd-8f37-44e6-89e8-658596ad93bf-config\") pod \"goldmane-54d579b49d-psdhk\" (UID: \"6857a3bd-8f37-44e6-89e8-658596ad93bf\") " pod="calico-system/goldmane-54d579b49d-psdhk" Sep 10 23:40:13.439550 kubelet[2676]: I0910 23:40:13.439134 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6857a3bd-8f37-44e6-89e8-658596ad93bf-goldmane-key-pair\") pod \"goldmane-54d579b49d-psdhk\" (UID: \"6857a3bd-8f37-44e6-89e8-658596ad93bf\") " pod="calico-system/goldmane-54d579b49d-psdhk" Sep 10 23:40:13.439550 kubelet[2676]: I0910 23:40:13.439148 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52njb\" (UniqueName: \"kubernetes.io/projected/6857a3bd-8f37-44e6-89e8-658596ad93bf-kube-api-access-52njb\") pod \"goldmane-54d579b49d-psdhk\" 
(UID: \"6857a3bd-8f37-44e6-89e8-658596ad93bf\") " pod="calico-system/goldmane-54d579b49d-psdhk" Sep 10 23:40:13.439846 kubelet[2676]: I0910 23:40:13.439166 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/838ea765-8eb0-4ca7-a81a-ef19e1d4710d-calico-apiserver-certs\") pod \"calico-apiserver-6ffb66c4b7-6b52v\" (UID: \"838ea765-8eb0-4ca7-a81a-ef19e1d4710d\") " pod="calico-apiserver/calico-apiserver-6ffb66c4b7-6b52v" Sep 10 23:40:13.439846 kubelet[2676]: I0910 23:40:13.439194 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzv96\" (UniqueName: \"kubernetes.io/projected/838ea765-8eb0-4ca7-a81a-ef19e1d4710d-kube-api-access-wzv96\") pod \"calico-apiserver-6ffb66c4b7-6b52v\" (UID: \"838ea765-8eb0-4ca7-a81a-ef19e1d4710d\") " pod="calico-apiserver/calico-apiserver-6ffb66c4b7-6b52v" Sep 10 23:40:13.439846 kubelet[2676]: I0910 23:40:13.439213 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndsw\" (UniqueName: \"kubernetes.io/projected/ed49b779-ae26-4f21-8c71-bfb295a2839f-kube-api-access-8ndsw\") pod \"calico-apiserver-6ffb66c4b7-vqb6s\" (UID: \"ed49b779-ae26-4f21-8c71-bfb295a2839f\") " pod="calico-apiserver/calico-apiserver-6ffb66c4b7-vqb6s" Sep 10 23:40:13.439846 kubelet[2676]: I0910 23:40:13.439230 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff3cf021-c1d9-42c3-9a3e-0e7f431c4377-tigera-ca-bundle\") pod \"calico-kube-controllers-55974d75bc-nmr7w\" (UID: \"ff3cf021-c1d9-42c3-9a3e-0e7f431c4377\") " pod="calico-system/calico-kube-controllers-55974d75bc-nmr7w" Sep 10 23:40:13.439846 kubelet[2676]: I0910 23:40:13.439264 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6857a3bd-8f37-44e6-89e8-658596ad93bf-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-psdhk\" (UID: \"6857a3bd-8f37-44e6-89e8-658596ad93bf\") " pod="calico-system/goldmane-54d579b49d-psdhk" Sep 10 23:40:13.443604 systemd[1]: Created slice kubepods-besteffort-podff611b41_4845_4dc6_a595_b55759475ea0.slice - libcontainer container kubepods-besteffort-podff611b41_4845_4dc6_a595_b55759475ea0.slice. Sep 10 23:40:13.450319 systemd[1]: Created slice kubepods-burstable-pod372d1c07_db82_4444_a884_054aa487c98c.slice - libcontainer container kubepods-burstable-pod372d1c07_db82_4444_a884_054aa487c98c.slice. Sep 10 23:40:13.456680 systemd[1]: Created slice kubepods-besteffort-poded49b779_ae26_4f21_8c71_bfb295a2839f.slice - libcontainer container kubepods-besteffort-poded49b779_ae26_4f21_8c71_bfb295a2839f.slice. Sep 10 23:40:13.461849 systemd[1]: Created slice kubepods-besteffort-pod6857a3bd_8f37_44e6_89e8_658596ad93bf.slice - libcontainer container kubepods-besteffort-pod6857a3bd_8f37_44e6_89e8_658596ad93bf.slice. 
Sep 10 23:40:13.540588 kubelet[2676]: I0910 23:40:13.540549 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372d1c07-db82-4444-a884-054aa487c98c-config-volume\") pod \"coredns-674b8bbfcf-85tgh\" (UID: \"372d1c07-db82-4444-a884-054aa487c98c\") " pod="kube-system/coredns-674b8bbfcf-85tgh" Sep 10 23:40:13.540931 kubelet[2676]: I0910 23:40:13.540621 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ks8\" (UniqueName: \"kubernetes.io/projected/372d1c07-db82-4444-a884-054aa487c98c-kube-api-access-n7ks8\") pod \"coredns-674b8bbfcf-85tgh\" (UID: \"372d1c07-db82-4444-a884-054aa487c98c\") " pod="kube-system/coredns-674b8bbfcf-85tgh" Sep 10 23:40:13.540931 kubelet[2676]: I0910 23:40:13.540645 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldft\" (UniqueName: \"kubernetes.io/projected/ff611b41-4845-4dc6-a595-b55759475ea0-kube-api-access-xldft\") pod \"whisker-66dcb46778-nngft\" (UID: \"ff611b41-4845-4dc6-a595-b55759475ea0\") " pod="calico-system/whisker-66dcb46778-nngft" Sep 10 23:40:13.540931 kubelet[2676]: I0910 23:40:13.540673 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ede58b-d519-414e-9f19-adc281fcdff3-config-volume\") pod \"coredns-674b8bbfcf-hft6g\" (UID: \"29ede58b-d519-414e-9f19-adc281fcdff3\") " pod="kube-system/coredns-674b8bbfcf-hft6g" Sep 10 23:40:13.540931 kubelet[2676]: I0910 23:40:13.540742 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tvph\" (UniqueName: \"kubernetes.io/projected/29ede58b-d519-414e-9f19-adc281fcdff3-kube-api-access-5tvph\") pod \"coredns-674b8bbfcf-hft6g\" (UID: \"29ede58b-d519-414e-9f19-adc281fcdff3\") " 
pod="kube-system/coredns-674b8bbfcf-hft6g" Sep 10 23:40:13.540931 kubelet[2676]: I0910 23:40:13.540759 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-backend-key-pair\") pod \"whisker-66dcb46778-nngft\" (UID: \"ff611b41-4845-4dc6-a595-b55759475ea0\") " pod="calico-system/whisker-66dcb46778-nngft" Sep 10 23:40:13.541145 kubelet[2676]: I0910 23:40:13.540789 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-ca-bundle\") pod \"whisker-66dcb46778-nngft\" (UID: \"ff611b41-4845-4dc6-a595-b55759475ea0\") " pod="calico-system/whisker-66dcb46778-nngft" Sep 10 23:40:13.561670 containerd[1539]: time="2025-09-10T23:40:13.561631104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:40:13.694637 containerd[1539]: time="2025-09-10T23:40:13.694597186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hft6g,Uid:29ede58b-d519-414e-9f19-adc281fcdff3,Namespace:kube-system,Attempt:0,}" Sep 10 23:40:13.718448 containerd[1539]: time="2025-09-10T23:40:13.718411989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-6b52v,Uid:838ea765-8eb0-4ca7-a81a-ef19e1d4710d,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:40:13.734499 containerd[1539]: time="2025-09-10T23:40:13.734428819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55974d75bc-nmr7w,Uid:ff3cf021-c1d9-42c3-9a3e-0e7f431c4377,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:13.748909 containerd[1539]: time="2025-09-10T23:40:13.748861605Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-66dcb46778-nngft,Uid:ff611b41-4845-4dc6-a595-b55759475ea0,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:13.754570 containerd[1539]: time="2025-09-10T23:40:13.754518735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85tgh,Uid:372d1c07-db82-4444-a884-054aa487c98c,Namespace:kube-system,Attempt:0,}" Sep 10 23:40:13.760247 containerd[1539]: time="2025-09-10T23:40:13.760209185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-vqb6s,Uid:ed49b779-ae26-4f21-8c71-bfb295a2839f,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:40:13.765931 containerd[1539]: time="2025-09-10T23:40:13.765886156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-psdhk,Uid:6857a3bd-8f37-44e6-89e8-658596ad93bf,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:13.834559 containerd[1539]: time="2025-09-10T23:40:13.834406201Z" level=error msg="Failed to destroy network for sandbox \"3ffe0d44565d7c48a6a3d93ca4ae8528ee415edbe52cdfa817a985efe31d70fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.836047 containerd[1539]: time="2025-09-10T23:40:13.835828603Z" level=error msg="Failed to destroy network for sandbox \"682e76df4e6778f07d5efde97002dfbc2f544f0b682ebbf1eb9bf094d477dcc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.836976 containerd[1539]: time="2025-09-10T23:40:13.836927205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55974d75bc-nmr7w,Uid:ff3cf021-c1d9-42c3-9a3e-0e7f431c4377,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ffe0d44565d7c48a6a3d93ca4ae8528ee415edbe52cdfa817a985efe31d70fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.838004 containerd[1539]: time="2025-09-10T23:40:13.837964567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-6b52v,Uid:838ea765-8eb0-4ca7-a81a-ef19e1d4710d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"682e76df4e6778f07d5efde97002dfbc2f544f0b682ebbf1eb9bf094d477dcc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.840617 kubelet[2676]: E0910 23:40:13.840455 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffe0d44565d7c48a6a3d93ca4ae8528ee415edbe52cdfa817a985efe31d70fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.840617 kubelet[2676]: E0910 23:40:13.840544 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffe0d44565d7c48a6a3d93ca4ae8528ee415edbe52cdfa817a985efe31d70fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55974d75bc-nmr7w" Sep 10 23:40:13.840781 kubelet[2676]: E0910 23:40:13.840750 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"682e76df4e6778f07d5efde97002dfbc2f544f0b682ebbf1eb9bf094d477dcc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.841260 kubelet[2676]: E0910 23:40:13.840805 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"682e76df4e6778f07d5efde97002dfbc2f544f0b682ebbf1eb9bf094d477dcc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-6b52v" Sep 10 23:40:13.841380 kubelet[2676]: E0910 23:40:13.841353 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"682e76df4e6778f07d5efde97002dfbc2f544f0b682ebbf1eb9bf094d477dcc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-6b52v" Sep 10 23:40:13.841656 kubelet[2676]: E0910 23:40:13.841434 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ffb66c4b7-6b52v_calico-apiserver(838ea765-8eb0-4ca7-a81a-ef19e1d4710d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ffb66c4b7-6b52v_calico-apiserver(838ea765-8eb0-4ca7-a81a-ef19e1d4710d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"682e76df4e6778f07d5efde97002dfbc2f544f0b682ebbf1eb9bf094d477dcc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6ffb66c4b7-6b52v" podUID="838ea765-8eb0-4ca7-a81a-ef19e1d4710d" Sep 10 23:40:13.842202 kubelet[2676]: E0910 23:40:13.842164 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffe0d44565d7c48a6a3d93ca4ae8528ee415edbe52cdfa817a985efe31d70fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55974d75bc-nmr7w" Sep 10 23:40:13.842554 kubelet[2676]: E0910 23:40:13.842529 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55974d75bc-nmr7w_calico-system(ff3cf021-c1d9-42c3-9a3e-0e7f431c4377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55974d75bc-nmr7w_calico-system(ff3cf021-c1d9-42c3-9a3e-0e7f431c4377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ffe0d44565d7c48a6a3d93ca4ae8528ee415edbe52cdfa817a985efe31d70fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55974d75bc-nmr7w" podUID="ff3cf021-c1d9-42c3-9a3e-0e7f431c4377" Sep 10 23:40:13.853176 containerd[1539]: time="2025-09-10T23:40:13.853123435Z" level=error msg="Failed to destroy network for sandbox \"509d23aec837e0059405e9666dfa11417f7fd71f1e7afe1a3a2aa6a820d32d4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.854892 containerd[1539]: time="2025-09-10T23:40:13.854781718Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-psdhk,Uid:6857a3bd-8f37-44e6-89e8-658596ad93bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"509d23aec837e0059405e9666dfa11417f7fd71f1e7afe1a3a2aa6a820d32d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.855949 containerd[1539]: time="2025-09-10T23:40:13.855312599Z" level=error msg="Failed to destroy network for sandbox \"d5517d76d27d89325f655b769657172be4bc69328b330d9853c5ea48d5a8c48a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.856295 containerd[1539]: time="2025-09-10T23:40:13.856227560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66dcb46778-nngft,Uid:ff611b41-4845-4dc6-a595-b55759475ea0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5517d76d27d89325f655b769657172be4bc69328b330d9853c5ea48d5a8c48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.856634 kubelet[2676]: E0910 23:40:13.856589 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5517d76d27d89325f655b769657172be4bc69328b330d9853c5ea48d5a8c48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.856699 kubelet[2676]: E0910 23:40:13.856649 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5517d76d27d89325f655b769657172be4bc69328b330d9853c5ea48d5a8c48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66dcb46778-nngft" Sep 10 23:40:13.856699 kubelet[2676]: E0910 23:40:13.856670 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5517d76d27d89325f655b769657172be4bc69328b330d9853c5ea48d5a8c48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66dcb46778-nngft" Sep 10 23:40:13.856749 kubelet[2676]: E0910 23:40:13.856729 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66dcb46778-nngft_calico-system(ff611b41-4845-4dc6-a595-b55759475ea0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66dcb46778-nngft_calico-system(ff611b41-4845-4dc6-a595-b55759475ea0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5517d76d27d89325f655b769657172be4bc69328b330d9853c5ea48d5a8c48a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66dcb46778-nngft" podUID="ff611b41-4845-4dc6-a595-b55759475ea0" Sep 10 23:40:13.857943 kubelet[2676]: E0910 23:40:13.855193 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"509d23aec837e0059405e9666dfa11417f7fd71f1e7afe1a3a2aa6a820d32d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.858091 kubelet[2676]: E0910 23:40:13.858057 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"509d23aec837e0059405e9666dfa11417f7fd71f1e7afe1a3a2aa6a820d32d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-psdhk" Sep 10 23:40:13.858230 kubelet[2676]: E0910 23:40:13.858144 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"509d23aec837e0059405e9666dfa11417f7fd71f1e7afe1a3a2aa6a820d32d4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-psdhk" Sep 10 23:40:13.858230 kubelet[2676]: E0910 23:40:13.858195 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-psdhk_calico-system(6857a3bd-8f37-44e6-89e8-658596ad93bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-psdhk_calico-system(6857a3bd-8f37-44e6-89e8-658596ad93bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"509d23aec837e0059405e9666dfa11417f7fd71f1e7afe1a3a2aa6a820d32d4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-psdhk" podUID="6857a3bd-8f37-44e6-89e8-658596ad93bf" Sep 10 23:40:13.868405 containerd[1539]: time="2025-09-10T23:40:13.868311502Z" level=error 
msg="Failed to destroy network for sandbox \"198184c238596f6b950c45a95c859ab87a6017a76374651cde4c6dada6bd602c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.869380 containerd[1539]: time="2025-09-10T23:40:13.869344064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hft6g,Uid:29ede58b-d519-414e-9f19-adc281fcdff3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"198184c238596f6b950c45a95c859ab87a6017a76374651cde4c6dada6bd602c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.869605 kubelet[2676]: E0910 23:40:13.869553 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198184c238596f6b950c45a95c859ab87a6017a76374651cde4c6dada6bd602c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.869651 kubelet[2676]: E0910 23:40:13.869611 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"198184c238596f6b950c45a95c859ab87a6017a76374651cde4c6dada6bd602c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hft6g" Sep 10 23:40:13.869651 kubelet[2676]: E0910 23:40:13.869631 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"198184c238596f6b950c45a95c859ab87a6017a76374651cde4c6dada6bd602c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hft6g" Sep 10 23:40:13.869801 kubelet[2676]: E0910 23:40:13.869681 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hft6g_kube-system(29ede58b-d519-414e-9f19-adc281fcdff3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hft6g_kube-system(29ede58b-d519-414e-9f19-adc281fcdff3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"198184c238596f6b950c45a95c859ab87a6017a76374651cde4c6dada6bd602c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hft6g" podUID="29ede58b-d519-414e-9f19-adc281fcdff3" Sep 10 23:40:13.890323 containerd[1539]: time="2025-09-10T23:40:13.889995542Z" level=error msg="Failed to destroy network for sandbox \"19c15445898df416929781adecc0a962a2ff532a2d70c6f4d05109e24d7547ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.893318 containerd[1539]: time="2025-09-10T23:40:13.892580667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85tgh,Uid:372d1c07-db82-4444-a884-054aa487c98c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c15445898df416929781adecc0a962a2ff532a2d70c6f4d05109e24d7547ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.894152 kubelet[2676]: E0910 23:40:13.894120 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c15445898df416929781adecc0a962a2ff532a2d70c6f4d05109e24d7547ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.894212 kubelet[2676]: E0910 23:40:13.894174 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c15445898df416929781adecc0a962a2ff532a2d70c6f4d05109e24d7547ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-85tgh" Sep 10 23:40:13.894212 kubelet[2676]: E0910 23:40:13.894194 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19c15445898df416929781adecc0a962a2ff532a2d70c6f4d05109e24d7547ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-85tgh" Sep 10 23:40:13.894310 containerd[1539]: time="2025-09-10T23:40:13.893020947Z" level=error msg="Failed to destroy network for sandbox \"1cb37a3596e7184e1bbf65b7122670b4445dba12a2b95ce0700a36ac758455ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.894340 kubelet[2676]: E0910 23:40:13.894310 2676 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-85tgh_kube-system(372d1c07-db82-4444-a884-054aa487c98c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-85tgh_kube-system(372d1c07-db82-4444-a884-054aa487c98c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19c15445898df416929781adecc0a962a2ff532a2d70c6f4d05109e24d7547ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-85tgh" podUID="372d1c07-db82-4444-a884-054aa487c98c" Sep 10 23:40:13.895717 containerd[1539]: time="2025-09-10T23:40:13.895670952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-vqb6s,Uid:ed49b779-ae26-4f21-8c71-bfb295a2839f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb37a3596e7184e1bbf65b7122670b4445dba12a2b95ce0700a36ac758455ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.895903 kubelet[2676]: E0910 23:40:13.895856 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb37a3596e7184e1bbf65b7122670b4445dba12a2b95ce0700a36ac758455ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:13.896000 kubelet[2676]: E0910 23:40:13.895907 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb37a3596e7184e1bbf65b7122670b4445dba12a2b95ce0700a36ac758455ec\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-vqb6s" Sep 10 23:40:13.896000 kubelet[2676]: E0910 23:40:13.895925 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb37a3596e7184e1bbf65b7122670b4445dba12a2b95ce0700a36ac758455ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-vqb6s" Sep 10 23:40:13.896000 kubelet[2676]: E0910 23:40:13.895976 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ffb66c4b7-vqb6s_calico-apiserver(ed49b779-ae26-4f21-8c71-bfb295a2839f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ffb66c4b7-vqb6s_calico-apiserver(ed49b779-ae26-4f21-8c71-bfb295a2839f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cb37a3596e7184e1bbf65b7122670b4445dba12a2b95ce0700a36ac758455ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-vqb6s" podUID="ed49b779-ae26-4f21-8c71-bfb295a2839f" Sep 10 23:40:14.458151 systemd[1]: Created slice kubepods-besteffort-podc9d769ee_2e06_45f3_90be_dacda960296b.slice - libcontainer container kubepods-besteffort-podc9d769ee_2e06_45f3_90be_dacda960296b.slice. 
Sep 10 23:40:14.463415 containerd[1539]: time="2025-09-10T23:40:14.461187050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwnmd,Uid:c9d769ee-2e06-45f3-90be-dacda960296b,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:14.522664 containerd[1539]: time="2025-09-10T23:40:14.522126994Z" level=error msg="Failed to destroy network for sandbox \"eddd99e2515c02e8335029ac5c509c311264eeae6b08da08c11760e9e41b4cc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:14.527360 containerd[1539]: time="2025-09-10T23:40:14.527321083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwnmd,Uid:c9d769ee-2e06-45f3-90be-dacda960296b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddd99e2515c02e8335029ac5c509c311264eeae6b08da08c11760e9e41b4cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:14.530192 kubelet[2676]: E0910 23:40:14.528457 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddd99e2515c02e8335029ac5c509c311264eeae6b08da08c11760e9e41b4cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:40:14.530192 kubelet[2676]: E0910 23:40:14.528519 2676 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddd99e2515c02e8335029ac5c509c311264eeae6b08da08c11760e9e41b4cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:14.530192 kubelet[2676]: E0910 23:40:14.528539 2676 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddd99e2515c02e8335029ac5c509c311264eeae6b08da08c11760e9e41b4cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gwnmd" Sep 10 23:40:14.530380 kubelet[2676]: E0910 23:40:14.528585 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gwnmd_calico-system(c9d769ee-2e06-45f3-90be-dacda960296b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gwnmd_calico-system(c9d769ee-2e06-45f3-90be-dacda960296b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eddd99e2515c02e8335029ac5c509c311264eeae6b08da08c11760e9e41b4cc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gwnmd" podUID="c9d769ee-2e06-45f3-90be-dacda960296b" Sep 10 23:40:14.617392 systemd[1]: run-netns-cni\x2d850327b5\x2da2f7\x2d5682\x2d58e4\x2d47727933e816.mount: Deactivated successfully. Sep 10 23:40:17.577032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346148510.mount: Deactivated successfully. 
Sep 10 23:40:17.862255 containerd[1539]: time="2025-09-10T23:40:17.861859407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:40:17.866582 containerd[1539]: time="2025-09-10T23:40:17.866522614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:17.888258 containerd[1539]: time="2025-09-10T23:40:17.888099804Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:17.889687 containerd[1539]: time="2025-09-10T23:40:17.888730725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.327057741s" Sep 10 23:40:17.889687 containerd[1539]: time="2025-09-10T23:40:17.888767845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:40:17.889687 containerd[1539]: time="2025-09-10T23:40:17.889137885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:17.904628 containerd[1539]: time="2025-09-10T23:40:17.904592507Z" level=info msg="CreateContainer within sandbox \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:40:17.933351 containerd[1539]: time="2025-09-10T23:40:17.931093344Z" level=info msg="Container 
97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:17.945951 containerd[1539]: time="2025-09-10T23:40:17.945821125Z" level=info msg="CreateContainer within sandbox \"98cebe6118f76e1eb67258ceac2a6f6c8ed359491062082b8c99c0746c58afe2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6\"" Sep 10 23:40:17.947360 containerd[1539]: time="2025-09-10T23:40:17.946740847Z" level=info msg="StartContainer for \"97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6\"" Sep 10 23:40:17.949793 containerd[1539]: time="2025-09-10T23:40:17.949761611Z" level=info msg="connecting to shim 97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6" address="unix:///run/containerd/s/938586dcc66a9067b7df220d12aee2b2e836e972b07cbc8c4f431e024745ace0" protocol=ttrpc version=3 Sep 10 23:40:17.972476 systemd[1]: Started cri-containerd-97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6.scope - libcontainer container 97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6. Sep 10 23:40:18.035979 containerd[1539]: time="2025-09-10T23:40:18.035870969Z" level=info msg="StartContainer for \"97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6\" returns successfully" Sep 10 23:40:18.163675 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:40:18.163776 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 23:40:18.475056 kubelet[2676]: I0910 23:40:18.474374 2676 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xldft\" (UniqueName: \"kubernetes.io/projected/ff611b41-4845-4dc6-a595-b55759475ea0-kube-api-access-xldft\") pod \"ff611b41-4845-4dc6-a595-b55759475ea0\" (UID: \"ff611b41-4845-4dc6-a595-b55759475ea0\") " Sep 10 23:40:18.475807 kubelet[2676]: I0910 23:40:18.475508 2676 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-ca-bundle\") pod \"ff611b41-4845-4dc6-a595-b55759475ea0\" (UID: \"ff611b41-4845-4dc6-a595-b55759475ea0\") " Sep 10 23:40:18.475807 kubelet[2676]: I0910 23:40:18.475542 2676 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-backend-key-pair\") pod \"ff611b41-4845-4dc6-a595-b55759475ea0\" (UID: \"ff611b41-4845-4dc6-a595-b55759475ea0\") " Sep 10 23:40:18.491432 kubelet[2676]: I0910 23:40:18.491385 2676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ff611b41-4845-4dc6-a595-b55759475ea0" (UID: "ff611b41-4845-4dc6-a595-b55759475ea0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:40:18.492006 kubelet[2676]: I0910 23:40:18.491531 2676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff611b41-4845-4dc6-a595-b55759475ea0-kube-api-access-xldft" (OuterVolumeSpecName: "kube-api-access-xldft") pod "ff611b41-4845-4dc6-a595-b55759475ea0" (UID: "ff611b41-4845-4dc6-a595-b55759475ea0"). InnerVolumeSpecName "kube-api-access-xldft". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:40:18.495333 kubelet[2676]: I0910 23:40:18.495300 2676 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ff611b41-4845-4dc6-a595-b55759475ea0" (UID: "ff611b41-4845-4dc6-a595-b55759475ea0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:40:18.577791 systemd[1]: var-lib-kubelet-pods-ff611b41\x2d4845\x2d4dc6\x2da595\x2db55759475ea0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxldft.mount: Deactivated successfully. Sep 10 23:40:18.580329 systemd[1]: var-lib-kubelet-pods-ff611b41\x2d4845\x2d4dc6\x2da595\x2db55759475ea0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 23:40:18.580598 kubelet[2676]: I0910 23:40:18.580569 2676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xldft\" (UniqueName: \"kubernetes.io/projected/ff611b41-4845-4dc6-a595-b55759475ea0-kube-api-access-xldft\") on node \"localhost\" DevicePath \"\"" Sep 10 23:40:18.580598 kubelet[2676]: I0910 23:40:18.580594 2676 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 23:40:18.580672 kubelet[2676]: I0910 23:40:18.580604 2676 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff611b41-4845-4dc6-a595-b55759475ea0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 23:40:18.590320 systemd[1]: Removed slice kubepods-besteffort-podff611b41_4845_4dc6_a595_b55759475ea0.slice - libcontainer container kubepods-besteffort-podff611b41_4845_4dc6_a595_b55759475ea0.slice. 
Sep 10 23:40:18.626653 kubelet[2676]: I0910 23:40:18.626578 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5whl7" podStartSLOduration=1.650254587 podStartE2EDuration="12.626561148s" podCreationTimestamp="2025-09-10 23:40:06 +0000 UTC" firstStartedPulling="2025-09-10 23:40:06.913441445 +0000 UTC m=+23.574617191" lastFinishedPulling="2025-09-10 23:40:17.889748046 +0000 UTC m=+34.550923752" observedRunningTime="2025-09-10 23:40:18.613272611 +0000 UTC m=+35.274448357" watchObservedRunningTime="2025-09-10 23:40:18.626561148 +0000 UTC m=+35.287736894" Sep 10 23:40:18.668093 systemd[1]: Created slice kubepods-besteffort-podc549619d_fc5f_4d12_aa02_b8edcec6339c.slice - libcontainer container kubepods-besteffort-podc549619d_fc5f_4d12_aa02_b8edcec6339c.slice. Sep 10 23:40:18.782252 kubelet[2676]: I0910 23:40:18.782190 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c549619d-fc5f-4d12-aa02-b8edcec6339c-whisker-backend-key-pair\") pod \"whisker-7944f9cffc-8mcp6\" (UID: \"c549619d-fc5f-4d12-aa02-b8edcec6339c\") " pod="calico-system/whisker-7944f9cffc-8mcp6" Sep 10 23:40:18.782252 kubelet[2676]: I0910 23:40:18.782263 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c549619d-fc5f-4d12-aa02-b8edcec6339c-whisker-ca-bundle\") pod \"whisker-7944f9cffc-8mcp6\" (UID: \"c549619d-fc5f-4d12-aa02-b8edcec6339c\") " pod="calico-system/whisker-7944f9cffc-8mcp6" Sep 10 23:40:18.782952 kubelet[2676]: I0910 23:40:18.782925 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhx9b\" (UniqueName: \"kubernetes.io/projected/c549619d-fc5f-4d12-aa02-b8edcec6339c-kube-api-access-zhx9b\") pod \"whisker-7944f9cffc-8mcp6\" (UID: 
\"c549619d-fc5f-4d12-aa02-b8edcec6339c\") " pod="calico-system/whisker-7944f9cffc-8mcp6" Sep 10 23:40:18.977591 containerd[1539]: time="2025-09-10T23:40:18.977546532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7944f9cffc-8mcp6,Uid:c549619d-fc5f-4d12-aa02-b8edcec6339c,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:19.166795 systemd-networkd[1446]: cali5c6fdc460a8: Link UP Sep 10 23:40:19.167685 systemd-networkd[1446]: cali5c6fdc460a8: Gained carrier Sep 10 23:40:19.184021 containerd[1539]: 2025-09-10 23:40:19.001 [INFO][3807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:19.184021 containerd[1539]: 2025-09-10 23:40:19.033 [INFO][3807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7944f9cffc--8mcp6-eth0 whisker-7944f9cffc- calico-system c549619d-fc5f-4d12-aa02-b8edcec6339c 904 0 2025-09-10 23:40:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7944f9cffc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7944f9cffc-8mcp6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5c6fdc460a8 [] [] }} ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-" Sep 10 23:40:19.184021 containerd[1539]: 2025-09-10 23:40:19.033 [INFO][3807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.184021 containerd[1539]: 2025-09-10 23:40:19.117 [INFO][3821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" HandleID="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Workload="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.117 [INFO][3821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" HandleID="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Workload="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7944f9cffc-8mcp6", "timestamp":"2025-09-10 23:40:19.117075906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.117 [INFO][3821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.117 [INFO][3821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.117 [INFO][3821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.129 [INFO][3821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" host="localhost" Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.134 [INFO][3821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.139 [INFO][3821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.141 [INFO][3821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.143 [INFO][3821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:19.184284 containerd[1539]: 2025-09-10 23:40:19.144 [INFO][3821] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" host="localhost" Sep 10 23:40:19.184525 containerd[1539]: 2025-09-10 23:40:19.145 [INFO][3821] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f Sep 10 23:40:19.184525 containerd[1539]: 2025-09-10 23:40:19.149 [INFO][3821] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" host="localhost" Sep 10 23:40:19.184525 containerd[1539]: 2025-09-10 23:40:19.155 [INFO][3821] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" host="localhost" Sep 10 23:40:19.184525 containerd[1539]: 2025-09-10 23:40:19.155 [INFO][3821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" host="localhost" Sep 10 23:40:19.184525 containerd[1539]: 2025-09-10 23:40:19.155 [INFO][3821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:40:19.184525 containerd[1539]: 2025-09-10 23:40:19.155 [INFO][3821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" HandleID="k8s-pod-network.f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Workload="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.184662 containerd[1539]: 2025-09-10 23:40:19.157 [INFO][3807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7944f9cffc--8mcp6-eth0", GenerateName:"whisker-7944f9cffc-", Namespace:"calico-system", SelfLink:"", UID:"c549619d-fc5f-4d12-aa02-b8edcec6339c", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7944f9cffc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7944f9cffc-8mcp6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c6fdc460a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:19.184662 containerd[1539]: 2025-09-10 23:40:19.158 [INFO][3807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.184748 containerd[1539]: 2025-09-10 23:40:19.158 [INFO][3807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c6fdc460a8 ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.184748 containerd[1539]: 2025-09-10 23:40:19.167 [INFO][3807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.184796 containerd[1539]: 2025-09-10 23:40:19.168 [INFO][3807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" 
WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7944f9cffc--8mcp6-eth0", GenerateName:"whisker-7944f9cffc-", Namespace:"calico-system", SelfLink:"", UID:"c549619d-fc5f-4d12-aa02-b8edcec6339c", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7944f9cffc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f", Pod:"whisker-7944f9cffc-8mcp6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c6fdc460a8", MAC:"ca:15:78:79:03:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:19.184847 containerd[1539]: 2025-09-10 23:40:19.181 [INFO][3807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" Namespace="calico-system" Pod="whisker-7944f9cffc-8mcp6" WorkloadEndpoint="localhost-k8s-whisker--7944f9cffc--8mcp6-eth0" Sep 10 23:40:19.283466 containerd[1539]: time="2025-09-10T23:40:19.283411112Z" level=info msg="connecting to shim 
f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f" address="unix:///run/containerd/s/5b8ee6bcec3eecceecfd66c6979e1571a83a8c0b0cdca2076793916c4816bb88" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:19.310398 systemd[1]: Started cri-containerd-f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f.scope - libcontainer container f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f. Sep 10 23:40:19.328656 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:19.346942 containerd[1539]: time="2025-09-10T23:40:19.346904110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7944f9cffc-8mcp6,Uid:c549619d-fc5f-4d12-aa02-b8edcec6339c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f\"" Sep 10 23:40:19.369674 containerd[1539]: time="2025-09-10T23:40:19.369446818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:40:19.458142 kubelet[2676]: I0910 23:40:19.457515 2676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff611b41-4845-4dc6-a595-b55759475ea0" path="/var/lib/kubelet/pods/ff611b41-4845-4dc6-a595-b55759475ea0/volumes" Sep 10 23:40:19.589589 kubelet[2676]: I0910 23:40:19.589213 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:40:20.251367 systemd-networkd[1446]: cali5c6fdc460a8: Gained IPv6LL Sep 10 23:40:20.375015 containerd[1539]: time="2025-09-10T23:40:20.374970233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:20.375904 containerd[1539]: time="2025-09-10T23:40:20.375768194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 23:40:20.376693 containerd[1539]: 
time="2025-09-10T23:40:20.376661635Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:20.379194 containerd[1539]: time="2025-09-10T23:40:20.379140718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:20.379799 containerd[1539]: time="2025-09-10T23:40:20.379755199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.010270741s" Sep 10 23:40:20.379799 containerd[1539]: time="2025-09-10T23:40:20.379788199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:40:20.383689 containerd[1539]: time="2025-09-10T23:40:20.383647243Z" level=info msg="CreateContainer within sandbox \"f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:40:20.397276 containerd[1539]: time="2025-09-10T23:40:20.396864139Z" level=info msg="Container e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:20.405076 containerd[1539]: time="2025-09-10T23:40:20.405040108Z" level=info msg="CreateContainer within sandbox \"f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5\"" 
Sep 10 23:40:20.405711 containerd[1539]: time="2025-09-10T23:40:20.405633389Z" level=info msg="StartContainer for \"e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5\"" Sep 10 23:40:20.406818 containerd[1539]: time="2025-09-10T23:40:20.406787510Z" level=info msg="connecting to shim e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5" address="unix:///run/containerd/s/5b8ee6bcec3eecceecfd66c6979e1571a83a8c0b0cdca2076793916c4816bb88" protocol=ttrpc version=3 Sep 10 23:40:20.431430 systemd[1]: Started cri-containerd-e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5.scope - libcontainer container e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5. Sep 10 23:40:20.464064 containerd[1539]: time="2025-09-10T23:40:20.464004217Z" level=info msg="StartContainer for \"e97788cbe075b07320b2b85d0c2aa89c2fcc0f8123424bc0ec894043dffe3ee5\" returns successfully" Sep 10 23:40:20.466051 containerd[1539]: time="2025-09-10T23:40:20.466013259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:40:22.158772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870048015.mount: Deactivated successfully. 
Sep 10 23:40:22.180654 containerd[1539]: time="2025-09-10T23:40:22.180602310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:22.181032 containerd[1539]: time="2025-09-10T23:40:22.180995670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:40:22.182005 containerd[1539]: time="2025-09-10T23:40:22.181972511Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:22.184635 containerd[1539]: time="2025-09-10T23:40:22.184570474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:22.185258 containerd[1539]: time="2025-09-10T23:40:22.185137114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.719077255s" Sep 10 23:40:22.185258 containerd[1539]: time="2025-09-10T23:40:22.185173074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:40:22.189745 containerd[1539]: time="2025-09-10T23:40:22.189671999Z" level=info msg="CreateContainer within sandbox \"f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:40:22.198500 
containerd[1539]: time="2025-09-10T23:40:22.198449408Z" level=info msg="Container c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:22.208172 containerd[1539]: time="2025-09-10T23:40:22.208121618Z" level=info msg="CreateContainer within sandbox \"f6410a8d5ecece61d1bc4a21dba81698e1f4e2ce2ad50ee5ad632ca40cd50f2f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91\"" Sep 10 23:40:22.208747 containerd[1539]: time="2025-09-10T23:40:22.208655338Z" level=info msg="StartContainer for \"c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91\"" Sep 10 23:40:22.210085 containerd[1539]: time="2025-09-10T23:40:22.210045180Z" level=info msg="connecting to shim c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91" address="unix:///run/containerd/s/5b8ee6bcec3eecceecfd66c6979e1571a83a8c0b0cdca2076793916c4816bb88" protocol=ttrpc version=3 Sep 10 23:40:22.234437 systemd[1]: Started cri-containerd-c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91.scope - libcontainer container c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91. 
Sep 10 23:40:22.269356 containerd[1539]: time="2025-09-10T23:40:22.269312480Z" level=info msg="StartContainer for \"c305dbc30f9f30bd2b84973b969849e14892d6656f2d95b6d953848fe066cd91\" returns successfully" Sep 10 23:40:22.621756 kubelet[2676]: I0910 23:40:22.621680 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7944f9cffc-8mcp6" podStartSLOduration=1.8046264619999999 podStartE2EDuration="4.621564559s" podCreationTimestamp="2025-09-10 23:40:18 +0000 UTC" firstStartedPulling="2025-09-10 23:40:19.369147578 +0000 UTC m=+36.030323324" lastFinishedPulling="2025-09-10 23:40:22.186085675 +0000 UTC m=+38.847261421" observedRunningTime="2025-09-10 23:40:22.620898199 +0000 UTC m=+39.282073905" watchObservedRunningTime="2025-09-10 23:40:22.621564559 +0000 UTC m=+39.282740305" Sep 10 23:40:24.067297 kubelet[2676]: I0910 23:40:24.066806 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:40:24.202455 containerd[1539]: time="2025-09-10T23:40:24.202411922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6\" id:\"f12e9475b55eb02cfc1797da14f6679a513a2caeb39522381b9e299c13b12829\" pid:4178 exit_status:1 exited_at:{seconds:1757547624 nanos:201919642}" Sep 10 23:40:24.282101 containerd[1539]: time="2025-09-10T23:40:24.282047273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6\" id:\"abfe73702276bfec48f92659856054e15bb3fd5cf9e25cf816f9521b4fda7ba1\" pid:4203 exit_status:1 exited_at:{seconds:1757547624 nanos:281719993}" Sep 10 23:40:24.453670 containerd[1539]: time="2025-09-10T23:40:24.453552147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hft6g,Uid:29ede58b-d519-414e-9f19-adc281fcdff3,Namespace:kube-system,Attempt:0,}" Sep 10 23:40:24.587786 systemd-networkd[1446]: cali937154ccae3: Link UP Sep 10 
23:40:24.588284 systemd-networkd[1446]: cali937154ccae3: Gained carrier Sep 10 23:40:24.627811 containerd[1539]: 2025-09-10 23:40:24.477 [INFO][4216] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:24.627811 containerd[1539]: 2025-09-10 23:40:24.493 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hft6g-eth0 coredns-674b8bbfcf- kube-system 29ede58b-d519-414e-9f19-adc281fcdff3 839 0 2025-09-10 23:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hft6g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali937154ccae3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-" Sep 10 23:40:24.627811 containerd[1539]: 2025-09-10 23:40:24.493 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.627811 containerd[1539]: 2025-09-10 23:40:24.524 [INFO][4231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" HandleID="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Workload="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.525 [INFO][4231] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" HandleID="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Workload="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137430), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hft6g", "timestamp":"2025-09-10 23:40:24.524874651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.525 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.525 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.525 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.538 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" host="localhost" Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.546 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.554 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.558 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.562 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 10 23:40:24.628074 containerd[1539]: 2025-09-10 23:40:24.562 [INFO][4231] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" host="localhost" Sep 10 23:40:24.628845 containerd[1539]: 2025-09-10 23:40:24.564 [INFO][4231] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4 Sep 10 23:40:24.628845 containerd[1539]: 2025-09-10 23:40:24.570 [INFO][4231] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" host="localhost" Sep 10 23:40:24.628845 containerd[1539]: 2025-09-10 23:40:24.582 [INFO][4231] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" host="localhost" Sep 10 23:40:24.628845 containerd[1539]: 2025-09-10 23:40:24.582 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" host="localhost" Sep 10 23:40:24.628845 containerd[1539]: 2025-09-10 23:40:24.582 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:40:24.628845 containerd[1539]: 2025-09-10 23:40:24.582 [INFO][4231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" HandleID="k8s-pod-network.09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Workload="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.629015 containerd[1539]: 2025-09-10 23:40:24.584 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hft6g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29ede58b-d519-414e-9f19-adc281fcdff3", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hft6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali937154ccae3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:24.629173 containerd[1539]: 2025-09-10 23:40:24.584 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.629173 containerd[1539]: 2025-09-10 23:40:24.584 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali937154ccae3 ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.629173 containerd[1539]: 2025-09-10 23:40:24.588 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.629464 containerd[1539]: 2025-09-10 23:40:24.590 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hft6g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29ede58b-d519-414e-9f19-adc281fcdff3", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4", Pod:"coredns-674b8bbfcf-hft6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali937154ccae3", MAC:"0e:32:9c:54:61:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:24.629464 containerd[1539]: 2025-09-10 23:40:24.622 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-hft6g" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hft6g-eth0" Sep 10 23:40:24.774378 containerd[1539]: time="2025-09-10T23:40:24.773859674Z" level=info msg="connecting to shim 09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4" address="unix:///run/containerd/s/f358066e9d308110ed4a798e290292046dce0b7db8258bb7adfc77cf77922c0f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:24.804417 systemd[1]: Started cri-containerd-09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4.scope - libcontainer container 09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4. Sep 10 23:40:24.816895 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:24.855696 containerd[1539]: time="2025-09-10T23:40:24.855614387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hft6g,Uid:29ede58b-d519-414e-9f19-adc281fcdff3,Namespace:kube-system,Attempt:0,} returns sandbox id \"09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4\"" Sep 10 23:40:24.860560 containerd[1539]: time="2025-09-10T23:40:24.860524152Z" level=info msg="CreateContainer within sandbox \"09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:40:24.875570 containerd[1539]: time="2025-09-10T23:40:24.875495765Z" level=info msg="Container 14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:24.877202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095209137.mount: Deactivated successfully. 
Sep 10 23:40:24.889148 containerd[1539]: time="2025-09-10T23:40:24.889085657Z" level=info msg="CreateContainer within sandbox \"09fb0b25bb0de3a266cff260bab77a5ad942c05210850e1ba1cc9735936690f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8\"" Sep 10 23:40:24.889686 containerd[1539]: time="2025-09-10T23:40:24.889635898Z" level=info msg="StartContainer for \"14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8\"" Sep 10 23:40:24.890520 containerd[1539]: time="2025-09-10T23:40:24.890497459Z" level=info msg="connecting to shim 14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8" address="unix:///run/containerd/s/f358066e9d308110ed4a798e290292046dce0b7db8258bb7adfc77cf77922c0f" protocol=ttrpc version=3 Sep 10 23:40:24.919406 systemd[1]: Started cri-containerd-14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8.scope - libcontainer container 14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8. 
Sep 10 23:40:24.964116 containerd[1539]: time="2025-09-10T23:40:24.964068564Z" level=info msg="StartContainer for \"14856ec2e012cda6622482e8d5d4b80043ae3a77f51008b30ccf4c8ef4c2b9a8\" returns successfully" Sep 10 23:40:25.452972 containerd[1539]: time="2025-09-10T23:40:25.452924737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85tgh,Uid:372d1c07-db82-4444-a884-054aa487c98c,Namespace:kube-system,Attempt:0,}" Sep 10 23:40:25.453495 containerd[1539]: time="2025-09-10T23:40:25.453307817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-vqb6s,Uid:ed49b779-ae26-4f21-8c71-bfb295a2839f,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:40:25.453804 containerd[1539]: time="2025-09-10T23:40:25.453772618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-psdhk,Uid:6857a3bd-8f37-44e6-89e8-658596ad93bf,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:25.599297 systemd-networkd[1446]: cali2907cb2f8ca: Link UP Sep 10 23:40:25.599894 systemd-networkd[1446]: cali2907cb2f8ca: Gained carrier Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.502 [INFO][4352] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.520 [INFO][4352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--85tgh-eth0 coredns-674b8bbfcf- kube-system 372d1c07-db82-4444-a884-054aa487c98c 844 0 2025-09-10 23:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-85tgh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2907cb2f8ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.520 [INFO][4352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.549 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" HandleID="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Workload="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.550 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" HandleID="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Workload="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-85tgh", "timestamp":"2025-09-10 23:40:25.549899899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.550 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.551 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.551 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.562 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.567 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.573 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.575 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.578 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.578 [INFO][4404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.580 [INFO][4404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.584 [INFO][4404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.590 [INFO][4404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.590 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" host="localhost" Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.590 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:40:25.622410 containerd[1539]: 2025-09-10 23:40:25.590 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" HandleID="k8s-pod-network.b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Workload="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.623006 containerd[1539]: 2025-09-10 23:40:25.594 [INFO][4352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--85tgh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"372d1c07-db82-4444-a884-054aa487c98c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-85tgh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2907cb2f8ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:25.623006 containerd[1539]: 2025-09-10 23:40:25.594 [INFO][4352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.623006 containerd[1539]: 2025-09-10 23:40:25.596 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2907cb2f8ca ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.623006 containerd[1539]: 2025-09-10 23:40:25.599 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.623006 containerd[1539]: 2025-09-10 23:40:25.600 [INFO][4352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--85tgh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"372d1c07-db82-4444-a884-054aa487c98c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c", Pod:"coredns-674b8bbfcf-85tgh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2907cb2f8ca", MAC:"0a:6b:cf:93:14:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:25.623006 containerd[1539]: 2025-09-10 23:40:25.615 [INFO][4352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-85tgh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--85tgh-eth0" Sep 10 23:40:25.638185 kubelet[2676]: I0910 23:40:25.638088 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hft6g" podStartSLOduration=35.638069173 podStartE2EDuration="35.638069173s" podCreationTimestamp="2025-09-10 23:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:40:25.637886412 +0000 UTC m=+42.299062158" watchObservedRunningTime="2025-09-10 23:40:25.638069173 +0000 UTC m=+42.299244919" Sep 10 23:40:25.691168 containerd[1539]: time="2025-09-10T23:40:25.691095497Z" level=info msg="connecting to shim b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c" address="unix:///run/containerd/s/65257c36762654c0c237cf1e35c44ba3c8ab94ad346a2aeca72159ed23fb55ac" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:25.720956 systemd-networkd[1446]: calice5857fe1ed: Link UP Sep 10 23:40:25.723159 systemd-networkd[1446]: calice5857fe1ed: Gained carrier Sep 10 23:40:25.731505 systemd[1]: Started cri-containerd-b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c.scope - libcontainer container b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c. 
Sep 10 23:40:25.743598 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.493 [INFO][4373] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.514 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--psdhk-eth0 goldmane-54d579b49d- calico-system 6857a3bd-8f37-44e6-89e8-658596ad93bf 843 0 2025-09-10 23:40:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-psdhk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calice5857fe1ed [] [] }} ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.515 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.554 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" HandleID="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Workload="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.554 [INFO][4394] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" HandleID="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Workload="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-psdhk", "timestamp":"2025-09-10 23:40:25.554223302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.554 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.591 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.591 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.665 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.679 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.690 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.692 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.695 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.695 [INFO][4394] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.698 [INFO][4394] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.704 [INFO][4394] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.712 [INFO][4394] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.712 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" host="localhost" Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.712 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:40:25.754332 containerd[1539]: 2025-09-10 23:40:25.712 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" HandleID="k8s-pod-network.91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Workload="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.755015 containerd[1539]: 2025-09-10 23:40:25.714 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--psdhk-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6857a3bd-8f37-44e6-89e8-658596ad93bf", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-psdhk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice5857fe1ed", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:25.755015 containerd[1539]: 2025-09-10 23:40:25.714 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.755015 containerd[1539]: 2025-09-10 23:40:25.714 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice5857fe1ed ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.755015 containerd[1539]: 2025-09-10 23:40:25.724 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.755015 containerd[1539]: 2025-09-10 23:40:25.725 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--psdhk-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6857a3bd-8f37-44e6-89e8-658596ad93bf", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e", Pod:"goldmane-54d579b49d-psdhk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calice5857fe1ed", MAC:"0e:37:77:e2:25:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:25.755015 containerd[1539]: 2025-09-10 23:40:25.752 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" Namespace="calico-system" Pod="goldmane-54d579b49d-psdhk" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--psdhk-eth0" Sep 10 23:40:25.784400 containerd[1539]: time="2025-09-10T23:40:25.784346095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-85tgh,Uid:372d1c07-db82-4444-a884-054aa487c98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c\"" Sep 10 23:40:25.791765 containerd[1539]: time="2025-09-10T23:40:25.791716262Z" level=info msg="CreateContainer within sandbox \"b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:40:25.805627 containerd[1539]: 
time="2025-09-10T23:40:25.805574153Z" level=info msg="Container 8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:25.821495 containerd[1539]: time="2025-09-10T23:40:25.814962601Z" level=info msg="CreateContainer within sandbox \"b6b4dd15e59c6ef55518fb1330946309981ab80f569ca9c3405f0d1ea06e6e2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005\"" Sep 10 23:40:25.821495 containerd[1539]: time="2025-09-10T23:40:25.816387042Z" level=info msg="StartContainer for \"8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005\"" Sep 10 23:40:25.821495 containerd[1539]: time="2025-09-10T23:40:25.817565563Z" level=info msg="connecting to shim 91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e" address="unix:///run/containerd/s/f871d3f9a79cbd928b6957420fe3d0890144b50e036cd42192c22abdcfcf3ce0" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:25.821495 containerd[1539]: time="2025-09-10T23:40:25.820053725Z" level=info msg="connecting to shim 8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005" address="unix:///run/containerd/s/65257c36762654c0c237cf1e35c44ba3c8ab94ad346a2aeca72159ed23fb55ac" protocol=ttrpc version=3 Sep 10 23:40:25.834526 systemd-networkd[1446]: calie0a53a8acb3: Link UP Sep 10 23:40:25.834741 systemd-networkd[1446]: calie0a53a8acb3: Gained carrier Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.492 [INFO][4358] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.509 [INFO][4358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0 calico-apiserver-6ffb66c4b7- calico-apiserver ed49b779-ae26-4f21-8c71-bfb295a2839f 845 0 2025-09-10 23:40:01 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ffb66c4b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ffb66c4b7-vqb6s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0a53a8acb3 [] [] }} ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.510 [INFO][4358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.554 [INFO][4392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" HandleID="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Workload="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.555 [INFO][4392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" HandleID="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Workload="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000512f80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ffb66c4b7-vqb6s", "timestamp":"2025-09-10 23:40:25.553449502 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.555 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.712 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.712 [INFO][4392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.763 [INFO][4392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.776 [INFO][4392] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.789 [INFO][4392] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.792 [INFO][4392] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.797 [INFO][4392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.797 [INFO][4392] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.800 [INFO][4392] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012 Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.807 [INFO][4392] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.817 [INFO][4392] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.817 [INFO][4392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" host="localhost" Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.818 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:40:25.850039 containerd[1539]: 2025-09-10 23:40:25.818 [INFO][4392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" HandleID="k8s-pod-network.af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Workload="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.850570 containerd[1539]: 2025-09-10 23:40:25.830 [INFO][4358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0", GenerateName:"calico-apiserver-6ffb66c4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed49b779-ae26-4f21-8c71-bfb295a2839f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffb66c4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ffb66c4b7-vqb6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0a53a8acb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:25.850570 containerd[1539]: 2025-09-10 23:40:25.830 [INFO][4358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.850570 containerd[1539]: 2025-09-10 23:40:25.830 [INFO][4358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0a53a8acb3 ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.850570 containerd[1539]: 2025-09-10 23:40:25.834 [INFO][4358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.850570 containerd[1539]: 2025-09-10 23:40:25.835 [INFO][4358] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0", 
GenerateName:"calico-apiserver-6ffb66c4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed49b779-ae26-4f21-8c71-bfb295a2839f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffb66c4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012", Pod:"calico-apiserver-6ffb66c4b7-vqb6s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0a53a8acb3", MAC:"5e:e5:33:26:01:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:25.850570 containerd[1539]: 2025-09-10 23:40:25.846 [INFO][4358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-vqb6s" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--vqb6s-eth0" Sep 10 23:40:25.853418 systemd[1]: Started cri-containerd-8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005.scope - libcontainer container 8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005. 
Sep 10 23:40:25.855519 systemd[1]: Started cri-containerd-91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e.scope - libcontainer container 91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e. Sep 10 23:40:25.876216 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:25.884653 containerd[1539]: time="2025-09-10T23:40:25.884597020Z" level=info msg="connecting to shim af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012" address="unix:///run/containerd/s/fa968413aa8e6371ce88ff7e7f35dff73c659086ba6bdbd357409b6168da1897" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:25.893282 containerd[1539]: time="2025-09-10T23:40:25.893097867Z" level=info msg="StartContainer for \"8cf6ecee6f93a74d5c802320ca6414b2ea7715dfd6ed3b4c20be5bfca1007005\" returns successfully" Sep 10 23:40:25.913338 containerd[1539]: time="2025-09-10T23:40:25.913295884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-psdhk,Uid:6857a3bd-8f37-44e6-89e8-658596ad93bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e\"" Sep 10 23:40:25.915792 containerd[1539]: time="2025-09-10T23:40:25.915371886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 23:40:25.917455 systemd[1]: Started cri-containerd-af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012.scope - libcontainer container af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012. 
Sep 10 23:40:25.938786 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:25.963579 containerd[1539]: time="2025-09-10T23:40:25.963532486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-vqb6s,Uid:ed49b779-ae26-4f21-8c71-bfb295a2839f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012\"" Sep 10 23:40:26.139396 systemd-networkd[1446]: cali937154ccae3: Gained IPv6LL Sep 10 23:40:26.463005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1360643911.mount: Deactivated successfully. Sep 10 23:40:26.639840 kubelet[2676]: I0910 23:40:26.639778 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-85tgh" podStartSLOduration=36.63976094 podStartE2EDuration="36.63976094s" podCreationTimestamp="2025-09-10 23:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:40:26.63912694 +0000 UTC m=+43.300302686" watchObservedRunningTime="2025-09-10 23:40:26.63976094 +0000 UTC m=+43.300936686" Sep 10 23:40:26.716360 systemd-networkd[1446]: cali2907cb2f8ca: Gained IPv6LL Sep 10 23:40:27.356344 systemd-networkd[1446]: calice5857fe1ed: Gained IPv6LL Sep 10 23:40:27.453582 containerd[1539]: time="2025-09-10T23:40:27.453544719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55974d75bc-nmr7w,Uid:ff3cf021-c1d9-42c3-9a3e-0e7f431c4377,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:27.548350 systemd-networkd[1446]: calie0a53a8acb3: Gained IPv6LL Sep 10 23:40:27.648764 systemd-networkd[1446]: cali966d08a892f: Link UP Sep 10 23:40:27.649184 systemd-networkd[1446]: cali966d08a892f: Gained carrier Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.489 [INFO][4679] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.509 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0 calico-kube-controllers-55974d75bc- calico-system ff3cf021-c1d9-42c3-9a3e-0e7f431c4377 841 0 2025-09-10 23:40:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55974d75bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55974d75bc-nmr7w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali966d08a892f [] [] }} ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.509 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.560 [INFO][4693] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" HandleID="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Workload="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.560 [INFO][4693] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" HandleID="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Workload="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012ee30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55974d75bc-nmr7w", "timestamp":"2025-09-10 23:40:27.560671198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.560 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.561 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.561 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.593 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.611 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.618 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.621 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.624 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.624 [INFO][4693] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.626 [INFO][4693] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.632 [INFO][4693] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.641 [INFO][4693] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.641 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" host="localhost" Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.641 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:40:27.665083 containerd[1539]: 2025-09-10 23:40:27.641 [INFO][4693] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" HandleID="k8s-pod-network.200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Workload="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.665819 containerd[1539]: 2025-09-10 23:40:27.644 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0", GenerateName:"calico-kube-controllers-55974d75bc-", Namespace:"calico-system", SelfLink:"", UID:"ff3cf021-c1d9-42c3-9a3e-0e7f431c4377", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55974d75bc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55974d75bc-nmr7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966d08a892f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:27.665819 containerd[1539]: 2025-09-10 23:40:27.644 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.665819 containerd[1539]: 2025-09-10 23:40:27.644 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali966d08a892f ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.665819 containerd[1539]: 2025-09-10 23:40:27.650 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.665819 containerd[1539]: 
2025-09-10 23:40:27.650 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0", GenerateName:"calico-kube-controllers-55974d75bc-", Namespace:"calico-system", SelfLink:"", UID:"ff3cf021-c1d9-42c3-9a3e-0e7f431c4377", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55974d75bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb", Pod:"calico-kube-controllers-55974d75bc-nmr7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966d08a892f", MAC:"ce:d9:b9:2f:98:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:27.665819 containerd[1539]: 
2025-09-10 23:40:27.660 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" Namespace="calico-system" Pod="calico-kube-controllers-55974d75bc-nmr7w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55974d75bc--nmr7w-eth0" Sep 10 23:40:27.680307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657675781.mount: Deactivated successfully. Sep 10 23:40:27.744605 containerd[1539]: time="2025-09-10T23:40:27.744544734Z" level=info msg="connecting to shim 200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb" address="unix:///run/containerd/s/a9baf22803f1377ac6981e01d7cbc3d772bf2e2e52fb2fef452ec19be8a5e68b" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:27.780458 systemd[1]: Started cri-containerd-200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb.scope - libcontainer container 200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb. Sep 10 23:40:27.793757 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:27.816867 containerd[1539]: time="2025-09-10T23:40:27.816800347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55974d75bc-nmr7w,Uid:ff3cf021-c1d9-42c3-9a3e-0e7f431c4377,Namespace:calico-system,Attempt:0,} returns sandbox id \"200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb\"" Sep 10 23:40:28.135970 containerd[1539]: time="2025-09-10T23:40:28.135449656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:28.136493 containerd[1539]: time="2025-09-10T23:40:28.136228057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 10 23:40:28.137144 containerd[1539]: time="2025-09-10T23:40:28.137092377Z" level=info 
msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:28.139275 containerd[1539]: time="2025-09-10T23:40:28.139193539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:28.139989 containerd[1539]: time="2025-09-10T23:40:28.139943139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.224529973s" Sep 10 23:40:28.139989 containerd[1539]: time="2025-09-10T23:40:28.139973059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 10 23:40:28.140987 containerd[1539]: time="2025-09-10T23:40:28.140938940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:40:28.145494 containerd[1539]: time="2025-09-10T23:40:28.144546382Z" level=info msg="CreateContainer within sandbox \"91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 23:40:28.157866 containerd[1539]: time="2025-09-10T23:40:28.155549990Z" level=info msg="Container 215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:28.168226 containerd[1539]: time="2025-09-10T23:40:28.168181359Z" level=info msg="CreateContainer within sandbox \"91a757eadb6ed933d261570787dabc24f3f66df4085953dc85ecd5537782771e\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\"" Sep 10 23:40:28.170904 containerd[1539]: time="2025-09-10T23:40:28.170831921Z" level=info msg="StartContainer for \"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\"" Sep 10 23:40:28.172673 containerd[1539]: time="2025-09-10T23:40:28.172618202Z" level=info msg="connecting to shim 215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191" address="unix:///run/containerd/s/f871d3f9a79cbd928b6957420fe3d0890144b50e036cd42192c22abdcfcf3ce0" protocol=ttrpc version=3 Sep 10 23:40:28.192450 systemd[1]: Started cri-containerd-215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191.scope - libcontainer container 215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191. Sep 10 23:40:28.235078 containerd[1539]: time="2025-09-10T23:40:28.235037725Z" level=info msg="StartContainer for \"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\" returns successfully" Sep 10 23:40:28.356308 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:47546.service - OpenSSH per-connection server daemon (10.0.0.1:47546). Sep 10 23:40:28.459527 sshd[4815]: Accepted publickey for core from 10.0.0.1 port 47546 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:40:28.461697 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:40:28.468010 systemd-logind[1520]: New session 8 of user core. Sep 10 23:40:28.477480 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 10 23:40:28.662212 kubelet[2676]: I0910 23:40:28.661612 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-psdhk" podStartSLOduration=20.435368365 podStartE2EDuration="22.66159214s" podCreationTimestamp="2025-09-10 23:40:06 +0000 UTC" firstStartedPulling="2025-09-10 23:40:25.914631685 +0000 UTC m=+42.575807431" lastFinishedPulling="2025-09-10 23:40:28.14085546 +0000 UTC m=+44.802031206" observedRunningTime="2025-09-10 23:40:28.659551419 +0000 UTC m=+45.320727165" watchObservedRunningTime="2025-09-10 23:40:28.66159214 +0000 UTC m=+45.322767886" Sep 10 23:40:28.733993 containerd[1539]: time="2025-09-10T23:40:28.733650070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\" id:\"b39b7a2629844ba9f7cf9260cd44ea1f29b123578770bc44805055d7b6c0c828\" pid:4846 exit_status:1 exited_at:{seconds:1757547628 nanos:733271950}" Sep 10 23:40:28.799219 sshd[4821]: Connection closed by 10.0.0.1 port 47546 Sep 10 23:40:28.799586 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Sep 10 23:40:28.802905 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:47546.service: Deactivated successfully. Sep 10 23:40:28.804992 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:40:28.807597 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:40:28.809368 systemd-logind[1520]: Removed session 8. 
Sep 10 23:40:28.827379 systemd-networkd[1446]: cali966d08a892f: Gained IPv6LL Sep 10 23:40:29.457898 containerd[1539]: time="2025-09-10T23:40:29.457536871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-6b52v,Uid:838ea765-8eb0-4ca7-a81a-ef19e1d4710d,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:40:29.459276 containerd[1539]: time="2025-09-10T23:40:29.459202312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwnmd,Uid:c9d769ee-2e06-45f3-90be-dacda960296b,Namespace:calico-system,Attempt:0,}" Sep 10 23:40:29.639498 systemd-networkd[1446]: cali94718335123: Link UP Sep 10 23:40:29.639773 systemd-networkd[1446]: cali94718335123: Gained carrier Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.508 [INFO][4886] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.533 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gwnmd-eth0 csi-node-driver- calico-system c9d769ee-2e06-45f3-90be-dacda960296b 717 0 2025-09-10 23:40:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gwnmd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94718335123 [] [] }} ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.533 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.568 [INFO][4924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" HandleID="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Workload="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.568 [INFO][4924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" HandleID="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Workload="localhost-k8s-csi--node--driver--gwnmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gwnmd", "timestamp":"2025-09-10 23:40:29.568369463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.568 [INFO][4924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.568 [INFO][4924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.568 [INFO][4924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.583 [INFO][4924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.590 [INFO][4924] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.595 [INFO][4924] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.597 [INFO][4924] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.600 [INFO][4924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.600 [INFO][4924] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.602 [INFO][4924] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84 Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.606 [INFO][4924] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.629 [INFO][4924] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.630 [INFO][4924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" host="localhost" Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.630 [INFO][4924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:40:29.678145 containerd[1539]: 2025-09-10 23:40:29.630 [INFO][4924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" HandleID="k8s-pod-network.2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Workload="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.679558 containerd[1539]: 2025-09-10 23:40:29.636 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gwnmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d769ee-2e06-45f3-90be-dacda960296b", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gwnmd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94718335123", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:29.679558 containerd[1539]: 2025-09-10 23:40:29.637 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.679558 containerd[1539]: 2025-09-10 23:40:29.637 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94718335123 ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.679558 containerd[1539]: 2025-09-10 23:40:29.639 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.679558 containerd[1539]: 2025-09-10 23:40:29.640 [INFO][4886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" 
Namespace="calico-system" Pod="csi-node-driver-gwnmd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gwnmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d769ee-2e06-45f3-90be-dacda960296b", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84", Pod:"csi-node-driver-gwnmd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94718335123", MAC:"26:37:5e:0b:32:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:29.679558 containerd[1539]: 2025-09-10 23:40:29.675 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" Namespace="calico-system" Pod="csi-node-driver-gwnmd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gwnmd-eth0" Sep 10 23:40:29.728274 containerd[1539]: time="2025-09-10T23:40:29.727611167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\" id:\"ae3a4395809ca4ad1c602d53360e2300d4800018b6e86a8058da4a1541902f2e\" pid:4958 exit_status:1 exited_at:{seconds:1757547629 nanos:727291326}" Sep 10 23:40:29.764901 containerd[1539]: time="2025-09-10T23:40:29.764851951Z" level=info msg="connecting to shim 2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84" address="unix:///run/containerd/s/03d75985a12cca3d6eea613cbe008f1e2ece7fde782afd925a92e460fc1477f1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:29.767134 systemd-networkd[1446]: calif7a49ca3565: Link UP Sep 10 23:40:29.768278 systemd-networkd[1446]: calif7a49ca3565: Gained carrier Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.509 [INFO][4891] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.535 [INFO][4891] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0 calico-apiserver-6ffb66c4b7- calico-apiserver 838ea765-8eb0-4ca7-a81a-ef19e1d4710d 840 0 2025-09-10 23:40:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ffb66c4b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ffb66c4b7-6b52v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7a49ca3565 [] [] }} ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.536 [INFO][4891] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.570 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" HandleID="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Workload="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.570 [INFO][4930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" HandleID="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Workload="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ffb66c4b7-6b52v", "timestamp":"2025-09-10 23:40:29.570717025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.570 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.630 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.630 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.715 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.721 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.730 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.733 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.737 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.737 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.740 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9 Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.749 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.759 [INFO][4930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.760 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" host="localhost" Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.760 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:40:29.790916 containerd[1539]: 2025-09-10 23:40:29.760 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" HandleID="k8s-pod-network.07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Workload="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.791499 containerd[1539]: 2025-09-10 23:40:29.764 [INFO][4891] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0", GenerateName:"calico-apiserver-6ffb66c4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"838ea765-8eb0-4ca7-a81a-ef19e1d4710d", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffb66c4b7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ffb66c4b7-6b52v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7a49ca3565", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:29.791499 containerd[1539]: 2025-09-10 23:40:29.765 [INFO][4891] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.791499 containerd[1539]: 2025-09-10 23:40:29.765 [INFO][4891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7a49ca3565 ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.791499 containerd[1539]: 2025-09-10 23:40:29.769 [INFO][4891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.791499 containerd[1539]: 2025-09-10 23:40:29.770 [INFO][4891] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0", GenerateName:"calico-apiserver-6ffb66c4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"838ea765-8eb0-4ca7-a81a-ef19e1d4710d", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ffb66c4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9", Pod:"calico-apiserver-6ffb66c4b7-6b52v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7a49ca3565", MAC:"66:15:b9:cc:06:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:40:29.791499 containerd[1539]: 2025-09-10 23:40:29.786 [INFO][4891] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" Namespace="calico-apiserver" Pod="calico-apiserver-6ffb66c4b7-6b52v" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ffb66c4b7--6b52v-eth0" Sep 10 23:40:29.806462 systemd[1]: Started cri-containerd-2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84.scope - libcontainer container 2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84. Sep 10 23:40:29.822690 containerd[1539]: time="2025-09-10T23:40:29.822398508Z" level=info msg="connecting to shim 07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9" address="unix:///run/containerd/s/c74cccec748bc32a6cd107d4c603e8d2354e43f45331ed64c63fc2375dee999d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:40:29.828183 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:29.853643 systemd[1]: Started cri-containerd-07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9.scope - libcontainer container 07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9. 
Sep 10 23:40:29.857691 containerd[1539]: time="2025-09-10T23:40:29.857650891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwnmd,Uid:c9d769ee-2e06-45f3-90be-dacda960296b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84\"" Sep 10 23:40:29.870028 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:40:29.896055 containerd[1539]: time="2025-09-10T23:40:29.895888396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ffb66c4b7-6b52v,Uid:838ea765-8eb0-4ca7-a81a-ef19e1d4710d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9\"" Sep 10 23:40:29.915600 kubelet[2676]: I0910 23:40:29.915543 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:40:30.382754 systemd-networkd[1446]: vxlan.calico: Link UP Sep 10 23:40:30.383279 systemd-networkd[1446]: vxlan.calico: Gained carrier Sep 10 23:40:30.458568 containerd[1539]: time="2025-09-10T23:40:30.458509462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:30.459977 containerd[1539]: time="2025-09-10T23:40:30.459466663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 23:40:30.460619 containerd[1539]: time="2025-09-10T23:40:30.460585823Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:30.463404 containerd[1539]: time="2025-09-10T23:40:30.463347425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:30.463806 containerd[1539]: time="2025-09-10T23:40:30.463769385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.322793565s" Sep 10 23:40:30.463841 containerd[1539]: time="2025-09-10T23:40:30.463809705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:40:30.466507 containerd[1539]: time="2025-09-10T23:40:30.466470147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 23:40:30.470460 containerd[1539]: time="2025-09-10T23:40:30.470423069Z" level=info msg="CreateContainer within sandbox \"af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:40:30.485949 containerd[1539]: time="2025-09-10T23:40:30.485903599Z" level=info msg="Container 6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:30.492759 containerd[1539]: time="2025-09-10T23:40:30.492714843Z" level=info msg="CreateContainer within sandbox \"af6b45b081aa33528ff190be53b5cb92eda68bd42145a9c068fc1d38b78ef012\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef\"" Sep 10 23:40:30.493627 containerd[1539]: time="2025-09-10T23:40:30.493585644Z" level=info msg="StartContainer for \"6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef\"" Sep 10 23:40:30.496795 containerd[1539]: 
time="2025-09-10T23:40:30.496761005Z" level=info msg="connecting to shim 6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef" address="unix:///run/containerd/s/fa968413aa8e6371ce88ff7e7f35dff73c659086ba6bdbd357409b6168da1897" protocol=ttrpc version=3 Sep 10 23:40:30.519460 systemd[1]: Started cri-containerd-6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef.scope - libcontainer container 6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef. Sep 10 23:40:30.568662 containerd[1539]: time="2025-09-10T23:40:30.568356409Z" level=info msg="StartContainer for \"6ac261ac224b53d2e4380b76ba0057815cded542f1432eed7977c89985ea90ef\" returns successfully" Sep 10 23:40:30.662484 kubelet[2676]: I0910 23:40:30.661857 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-vqb6s" podStartSLOduration=25.161503247 podStartE2EDuration="29.661838506s" podCreationTimestamp="2025-09-10 23:40:01 +0000 UTC" firstStartedPulling="2025-09-10 23:40:25.964720207 +0000 UTC m=+42.625895913" lastFinishedPulling="2025-09-10 23:40:30.465055426 +0000 UTC m=+47.126231172" observedRunningTime="2025-09-10 23:40:30.657016383 +0000 UTC m=+47.318192129" watchObservedRunningTime="2025-09-10 23:40:30.661838506 +0000 UTC m=+47.323014252" Sep 10 23:40:30.741251 containerd[1539]: time="2025-09-10T23:40:30.741208114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\" id:\"583d93180ca6e108c40cc6123d04d92eb284c4ffcce13a4fc2e240fbc54361d5\" pid:5229 exited_at:{seconds:1757547630 nanos:740888514}" Sep 10 23:40:30.751587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568028300.mount: Deactivated successfully. 
Sep 10 23:40:31.259457 systemd-networkd[1446]: calif7a49ca3565: Gained IPv6LL Sep 10 23:40:31.644146 kubelet[2676]: I0910 23:40:31.644087 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:40:31.708332 systemd-networkd[1446]: cali94718335123: Gained IPv6LL Sep 10 23:40:31.771516 systemd-networkd[1446]: vxlan.calico: Gained IPv6LL Sep 10 23:40:32.434580 containerd[1539]: time="2025-09-10T23:40:32.434508434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:32.437147 containerd[1539]: time="2025-09-10T23:40:32.437085196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 23:40:32.439742 containerd[1539]: time="2025-09-10T23:40:32.439699517Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:32.442895 containerd[1539]: time="2025-09-10T23:40:32.442841359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:40:32.443477 containerd[1539]: time="2025-09-10T23:40:32.443417159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.976900532s" Sep 10 23:40:32.443477 containerd[1539]: time="2025-09-10T23:40:32.443472039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image 
reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 23:40:32.444598 containerd[1539]: time="2025-09-10T23:40:32.444402839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 23:40:32.456412 containerd[1539]: time="2025-09-10T23:40:32.456371606Z" level=info msg="CreateContainer within sandbox \"200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 23:40:32.465327 containerd[1539]: time="2025-09-10T23:40:32.465254931Z" level=info msg="Container ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:40:32.475602 containerd[1539]: time="2025-09-10T23:40:32.475535376Z" level=info msg="CreateContainer within sandbox \"200ddf12bb2d3c050544759e5d17f2da89a6b09ac2a159ef4c0bdca5ab8d81eb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e\"" Sep 10 23:40:32.476197 containerd[1539]: time="2025-09-10T23:40:32.476170056Z" level=info msg="StartContainer for \"ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e\"" Sep 10 23:40:32.478653 containerd[1539]: time="2025-09-10T23:40:32.478620658Z" level=info msg="connecting to shim ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e" address="unix:///run/containerd/s/a9baf22803f1377ac6981e01d7cbc3d772bf2e2e52fb2fef452ec19be8a5e68b" protocol=ttrpc version=3 Sep 10 23:40:32.509493 systemd[1]: Started cri-containerd-ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e.scope - libcontainer container ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e. 
Sep 10 23:40:32.645627 containerd[1539]: time="2025-09-10T23:40:32.645510227Z" level=info msg="StartContainer for \"ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e\" returns successfully"
Sep 10 23:40:32.678333 kubelet[2676]: I0910 23:40:32.677129 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55974d75bc-nmr7w" podStartSLOduration=22.051007473 podStartE2EDuration="26.677109564s" podCreationTimestamp="2025-09-10 23:40:06 +0000 UTC" firstStartedPulling="2025-09-10 23:40:27.818149108 +0000 UTC m=+44.479324854" lastFinishedPulling="2025-09-10 23:40:32.444251199 +0000 UTC m=+49.105426945" observedRunningTime="2025-09-10 23:40:32.676419163 +0000 UTC m=+49.337594909" watchObservedRunningTime="2025-09-10 23:40:32.677109564 +0000 UTC m=+49.338285310"
Sep 10 23:40:32.716001 containerd[1539]: time="2025-09-10T23:40:32.715860225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e\" id:\"05bae1dd9a0a20aa1f87b30ef91919dbbefd64a7cc3b4541879f6a41f924d1fe\" pid:5347 exited_at:{seconds:1757547632 nanos:713217503}"
Sep 10 23:40:33.781776 containerd[1539]: time="2025-09-10T23:40:33.781125568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:33.781776 containerd[1539]: time="2025-09-10T23:40:33.781621208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 23:40:33.782487 containerd[1539]: time="2025-09-10T23:40:33.782459209Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:33.784374 containerd[1539]: time="2025-09-10T23:40:33.784343890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:33.784923 containerd[1539]: time="2025-09-10T23:40:33.784892570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.340455691s"
Sep 10 23:40:33.784967 containerd[1539]: time="2025-09-10T23:40:33.784931410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 23:40:33.786099 containerd[1539]: time="2025-09-10T23:40:33.786042450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 10 23:40:33.789998 containerd[1539]: time="2025-09-10T23:40:33.789911652Z" level=info msg="CreateContainer within sandbox \"2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 23:40:33.799501 containerd[1539]: time="2025-09-10T23:40:33.799446937Z" level=info msg="Container 8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:40:33.803598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285385963.mount: Deactivated successfully.
Sep 10 23:40:33.808320 containerd[1539]: time="2025-09-10T23:40:33.808272982Z" level=info msg="CreateContainer within sandbox \"2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f\""
Sep 10 23:40:33.809733 containerd[1539]: time="2025-09-10T23:40:33.809695702Z" level=info msg="StartContainer for \"8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f\""
Sep 10 23:40:33.811737 containerd[1539]: time="2025-09-10T23:40:33.811697663Z" level=info msg="connecting to shim 8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f" address="unix:///run/containerd/s/03d75985a12cca3d6eea613cbe008f1e2ece7fde782afd925a92e460fc1477f1" protocol=ttrpc version=3
Sep 10 23:40:33.820177 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:39610.service - OpenSSH per-connection server daemon (10.0.0.1:39610).
Sep 10 23:40:33.851458 systemd[1]: Started cri-containerd-8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f.scope - libcontainer container 8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f.
Sep 10 23:40:33.891465 containerd[1539]: time="2025-09-10T23:40:33.891424263Z" level=info msg="StartContainer for \"8a483f6ca3147eba9d7201f20c58090534436698c64e559c30670698c36fc36f\" returns successfully"
Sep 10 23:40:33.893440 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 39610 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:33.896096 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:33.901965 systemd-logind[1520]: New session 9 of user core.
Sep 10 23:40:33.915556 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 23:40:34.061276 containerd[1539]: time="2025-09-10T23:40:34.061138306Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:34.062871 containerd[1539]: time="2025-09-10T23:40:34.062803907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 10 23:40:34.064113 containerd[1539]: time="2025-09-10T23:40:34.064059708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 277.987258ms"
Sep 10 23:40:34.064113 containerd[1539]: time="2025-09-10T23:40:34.064098468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 23:40:34.065167 containerd[1539]: time="2025-09-10T23:40:34.065088988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 23:40:34.072664 containerd[1539]: time="2025-09-10T23:40:34.072627352Z" level=info msg="CreateContainer within sandbox \"07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 23:40:34.084484 containerd[1539]: time="2025-09-10T23:40:34.084437197Z" level=info msg="Container 9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:40:34.087721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732782084.mount: Deactivated successfully.
Sep 10 23:40:34.095336 containerd[1539]: time="2025-09-10T23:40:34.094710282Z" level=info msg="CreateContainer within sandbox \"07526a0b44f88ccc32def87290dcd84d78378b4b2471846c649d0906e3a497c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f\""
Sep 10 23:40:34.095597 containerd[1539]: time="2025-09-10T23:40:34.095566243Z" level=info msg="StartContainer for \"9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f\""
Sep 10 23:40:34.096817 containerd[1539]: time="2025-09-10T23:40:34.096774843Z" level=info msg="connecting to shim 9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f" address="unix:///run/containerd/s/c74cccec748bc32a6cd107d4c603e8d2354e43f45331ed64c63fc2375dee999d" protocol=ttrpc version=3
Sep 10 23:40:34.099889 sshd[5399]: Connection closed by 10.0.0.1 port 39610
Sep 10 23:40:34.099628 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:34.103041 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:39610.service: Deactivated successfully.
Sep 10 23:40:34.104882 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 23:40:34.107342 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit.
Sep 10 23:40:34.109082 systemd-logind[1520]: Removed session 9.
Sep 10 23:40:34.123456 systemd[1]: Started cri-containerd-9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f.scope - libcontainer container 9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f.
Sep 10 23:40:34.167766 containerd[1539]: time="2025-09-10T23:40:34.167727797Z" level=info msg="StartContainer for \"9803105083d33beaae1281f3253d82f076638d91dae011ad67fd8cb53b19c82f\" returns successfully"
Sep 10 23:40:35.729086 kubelet[2676]: I0910 23:40:35.729016 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ffb66c4b7-6b52v" podStartSLOduration=30.561705638 podStartE2EDuration="34.728997029s" podCreationTimestamp="2025-09-10 23:40:01 +0000 UTC" firstStartedPulling="2025-09-10 23:40:29.897563037 +0000 UTC m=+46.558738783" lastFinishedPulling="2025-09-10 23:40:34.064854428 +0000 UTC m=+50.726030174" observedRunningTime="2025-09-10 23:40:34.738767745 +0000 UTC m=+51.399943491" watchObservedRunningTime="2025-09-10 23:40:35.728997029 +0000 UTC m=+52.390172775"
Sep 10 23:40:35.817548 containerd[1539]: time="2025-09-10T23:40:35.817490708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:35.841811 containerd[1539]: time="2025-09-10T23:40:35.830850234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 10 23:40:35.844540 containerd[1539]: time="2025-09-10T23:40:35.844503000Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:35.857911 containerd[1539]: time="2025-09-10T23:40:35.857860206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:40:35.858701 containerd[1539]: time="2025-09-10T23:40:35.858652366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.793477458s"
Sep 10 23:40:35.858701 containerd[1539]: time="2025-09-10T23:40:35.858691806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 10 23:40:35.863454 containerd[1539]: time="2025-09-10T23:40:35.863412208Z" level=info msg="CreateContainer within sandbox \"2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 23:40:35.883128 containerd[1539]: time="2025-09-10T23:40:35.883067737Z" level=info msg="Container 27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:40:35.900305 containerd[1539]: time="2025-09-10T23:40:35.900252824Z" level=info msg="CreateContainer within sandbox \"2f93ff77a0f6c95ed6d8636be4f8560906e03ef547d6fefcf63bd32edfb08f84\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e\""
Sep 10 23:40:35.901078 containerd[1539]: time="2025-09-10T23:40:35.900809504Z" level=info msg="StartContainer for \"27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e\""
Sep 10 23:40:35.902380 containerd[1539]: time="2025-09-10T23:40:35.902348505Z" level=info msg="connecting to shim 27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e" address="unix:///run/containerd/s/03d75985a12cca3d6eea613cbe008f1e2ece7fde782afd925a92e460fc1477f1" protocol=ttrpc version=3
Sep 10 23:40:35.928474 systemd[1]: Started cri-containerd-27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e.scope - libcontainer container 27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e.
Sep 10 23:40:35.996760 containerd[1539]: time="2025-09-10T23:40:35.996651947Z" level=info msg="StartContainer for \"27fc29dcbb99b7f0a25a74bdc03455e9e5a963387903b9e83a76c1af441a100e\" returns successfully"
Sep 10 23:40:36.531269 kubelet[2676]: I0910 23:40:36.531203 2676 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 23:40:36.531269 kubelet[2676]: I0910 23:40:36.531281 2676 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 23:40:39.113122 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:39622.service - OpenSSH per-connection server daemon (10.0.0.1:39622).
Sep 10 23:40:39.178436 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 39622 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:39.180363 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:39.185050 systemd-logind[1520]: New session 10 of user core.
Sep 10 23:40:39.194457 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 23:40:39.356287 sshd[5503]: Connection closed by 10.0.0.1 port 39622
Sep 10 23:40:39.356672 sshd-session[5501]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:39.365749 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:39622.service: Deactivated successfully.
Sep 10 23:40:39.367659 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 23:40:39.368303 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit.
Sep 10 23:40:39.374560 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:39632.service - OpenSSH per-connection server daemon (10.0.0.1:39632).
Sep 10 23:40:39.375639 systemd-logind[1520]: Removed session 10.
Sep 10 23:40:39.428073 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 39632 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:39.429478 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:39.434376 systemd-logind[1520]: New session 11 of user core.
Sep 10 23:40:39.444431 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 23:40:39.676732 sshd[5519]: Connection closed by 10.0.0.1 port 39632
Sep 10 23:40:39.677441 sshd-session[5517]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:39.695988 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:39632.service: Deactivated successfully.
Sep 10 23:40:39.697993 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 23:40:39.700682 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit.
Sep 10 23:40:39.707909 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:39640.service - OpenSSH per-connection server daemon (10.0.0.1:39640).
Sep 10 23:40:39.710693 systemd-logind[1520]: Removed session 11.
Sep 10 23:40:39.765150 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 39640 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:39.767402 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:39.775344 systemd-logind[1520]: New session 12 of user core.
Sep 10 23:40:39.783451 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:40:39.949585 sshd[5533]: Connection closed by 10.0.0.1 port 39640
Sep 10 23:40:39.949893 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:39.953811 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:40:39.953962 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:39640.service: Deactivated successfully.
Sep 10 23:40:39.957573 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:40:39.960540 systemd-logind[1520]: Removed session 12.
Sep 10 23:40:42.505608 containerd[1539]: time="2025-09-10T23:40:42.505511011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e\" id:\"eedee127542641945bfb7507d3cd46b6e234b5723eed9b99f2bc27129e948f75\" pid:5570 exited_at:{seconds:1757547642 nanos:505056171}"
Sep 10 23:40:44.714212 containerd[1539]: time="2025-09-10T23:40:44.714160509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\" id:\"122622ece3bea2771c09d70673d116749c1a2ff575af03492c00fe5e8feff10d\" pid:5596 exited_at:{seconds:1757547644 nanos:713733029}"
Sep 10 23:40:44.969064 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:48098.service - OpenSSH per-connection server daemon (10.0.0.1:48098).
Sep 10 23:40:45.050917 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 48098 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:45.051958 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:45.059973 systemd-logind[1520]: New session 13 of user core.
Sep 10 23:40:45.067501 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:40:45.290723 sshd[5611]: Connection closed by 10.0.0.1 port 48098
Sep 10 23:40:45.291126 sshd-session[5609]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:45.306472 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:48098.service: Deactivated successfully.
Sep 10 23:40:45.308723 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:40:45.310454 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:40:45.313479 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:48104.service - OpenSSH per-connection server daemon (10.0.0.1:48104).
Sep 10 23:40:45.315079 systemd-logind[1520]: Removed session 13.
Sep 10 23:40:45.388619 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 48104 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:45.390399 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:45.395498 systemd-logind[1520]: New session 14 of user core.
Sep 10 23:40:45.402665 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:40:45.637815 sshd[5626]: Connection closed by 10.0.0.1 port 48104
Sep 10 23:40:45.638295 sshd-session[5624]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:45.651977 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:48104.service: Deactivated successfully.
Sep 10 23:40:45.654040 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:40:45.654943 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:40:45.657968 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:48108.service - OpenSSH per-connection server daemon (10.0.0.1:48108).
Sep 10 23:40:45.660273 systemd-logind[1520]: Removed session 14.
Sep 10 23:40:45.737876 sshd[5638]: Accepted publickey for core from 10.0.0.1 port 48108 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:45.739441 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:45.746973 systemd-logind[1520]: New session 15 of user core.
Sep 10 23:40:45.756477 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:40:46.433510 sshd[5640]: Connection closed by 10.0.0.1 port 48108
Sep 10 23:40:46.434640 sshd-session[5638]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:46.446014 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:48108.service: Deactivated successfully.
Sep 10 23:40:46.450179 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:40:46.451875 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:40:46.461697 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:48110.service - OpenSSH per-connection server daemon (10.0.0.1:48110).
Sep 10 23:40:46.465351 systemd-logind[1520]: Removed session 15.
Sep 10 23:40:46.521181 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 48110 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:46.522746 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:46.528368 systemd-logind[1520]: New session 16 of user core.
Sep 10 23:40:46.542454 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:40:46.861270 sshd[5660]: Connection closed by 10.0.0.1 port 48110
Sep 10 23:40:46.861488 sshd-session[5658]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:46.872136 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:48110.service: Deactivated successfully.
Sep 10 23:40:46.874290 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:40:46.876941 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:40:46.882087 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:48114.service - OpenSSH per-connection server daemon (10.0.0.1:48114).
Sep 10 23:40:46.884349 systemd-logind[1520]: Removed session 16.
Sep 10 23:40:46.944558 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 48114 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:46.946254 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:46.951379 systemd-logind[1520]: New session 17 of user core.
Sep 10 23:40:46.956453 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:40:47.100681 sshd[5674]: Connection closed by 10.0.0.1 port 48114
Sep 10 23:40:47.101279 sshd-session[5672]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:47.105429 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:48114.service: Deactivated successfully.
Sep 10 23:40:47.107754 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:40:47.108739 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:40:47.110261 systemd-logind[1520]: Removed session 17.
Sep 10 23:40:52.118201 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:43728.service - OpenSSH per-connection server daemon (10.0.0.1:43728).
Sep 10 23:40:52.172821 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 43728 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:52.178053 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:52.184095 systemd-logind[1520]: New session 18 of user core.
Sep 10 23:40:52.195448 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:40:52.333599 sshd[5701]: Connection closed by 10.0.0.1 port 43728
Sep 10 23:40:52.334010 sshd-session[5699]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:52.339364 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:43728.service: Deactivated successfully.
Sep 10 23:40:52.341586 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:40:52.342460 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:40:52.343885 systemd-logind[1520]: Removed session 18.
Sep 10 23:40:54.322695 containerd[1539]: time="2025-09-10T23:40:54.322655144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97e4f6c73f2acedd18cb1dbfb3efabc306244380d522fdf4f78af875109596a6\" id:\"a36618fcfbd1290939be9d5e1ea47bcea95e8ca95ccc3f2b5600b02077248696\" pid:5726 exited_at:{seconds:1757547654 nanos:322358299}"
Sep 10 23:40:54.344586 kubelet[2676]: I0910 23:40:54.344429 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gwnmd" podStartSLOduration=42.344604019 podStartE2EDuration="48.344296613s" podCreationTimestamp="2025-09-10 23:40:06 +0000 UTC" firstStartedPulling="2025-09-10 23:40:29.859599492 +0000 UTC m=+46.520775198" lastFinishedPulling="2025-09-10 23:40:35.859292046 +0000 UTC m=+52.520467792" observedRunningTime="2025-09-10 23:40:36.694627995 +0000 UTC m=+53.355803781" watchObservedRunningTime="2025-09-10 23:40:54.344296613 +0000 UTC m=+71.005472359"
Sep 10 23:40:57.350747 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:43744.service - OpenSSH per-connection server daemon (10.0.0.1:43744).
Sep 10 23:40:57.417034 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 43744 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:40:57.418792 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:40:57.428035 systemd-logind[1520]: New session 19 of user core.
Sep 10 23:40:57.436812 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:40:57.621984 sshd[5742]: Connection closed by 10.0.0.1 port 43744
Sep 10 23:40:57.622285 sshd-session[5740]: pam_unix(sshd:session): session closed for user core
Sep 10 23:40:57.628727 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:43744.service: Deactivated successfully.
Sep 10 23:40:57.632056 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:40:57.635808 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:40:57.637339 systemd-logind[1520]: Removed session 19.
Sep 10 23:40:58.477408 kubelet[2676]: I0910 23:40:58.477365 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:41:00.736844 containerd[1539]: time="2025-09-10T23:41:00.736798911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"215d24274ad0aa243a5d253b8a0343b82d6d38fd4ba36ffd0b134e61f15f4191\" id:\"1f48edab71bdc052dad49e54e860db7c778bae2aa171a40e5c2b976c4d7edc96\" pid:5769 exited_at:{seconds:1757547660 nanos:736488987}"
Sep 10 23:41:02.634423 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:43812.service - OpenSSH per-connection server daemon (10.0.0.1:43812).
Sep 10 23:41:02.692179 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 43812 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs
Sep 10 23:41:02.691502 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:41:02.696617 systemd-logind[1520]: New session 20 of user core.
Sep 10 23:41:02.698140 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:41:02.716200 containerd[1539]: time="2025-09-10T23:41:02.716150087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab51a1bed6b14d17254eb74749b56c23f941276e46bccd799c4d3d8b1d60a91e\" id:\"3d8e91f14646ccae0c77bf69a0bde86481268f0bf038715944a64353ba3222a9\" pid:5796 exited_at:{seconds:1757547662 nanos:715505518}"
Sep 10 23:41:02.851825 sshd[5802]: Connection closed by 10.0.0.1 port 43812
Sep 10 23:41:02.852143 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Sep 10 23:41:02.855165 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:43812.service: Deactivated successfully.
Sep 10 23:41:02.857465 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:41:02.859111 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:41:02.861510 systemd-logind[1520]: Removed session 20.