Sep 12 17:29:44.761936 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:29:44.761958 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025
Sep 12 17:29:44.761967 kernel: KASLR enabled
Sep 12 17:29:44.761973 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:29:44.761979 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Sep 12 17:29:44.761984 kernel: random: crng init done
Sep 12 17:29:44.761991 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 17:29:44.761997 kernel: secureboot: Secure boot enabled
Sep 12 17:29:44.762002 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:29:44.762009 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 12 17:29:44.762015 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:29:44.762021 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762027 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762033 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762040 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762047 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762054 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762060 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762066 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762072 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:29:44.762078 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 17:29:44.762084 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 17:29:44.762090 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:29:44.762096 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 12 17:29:44.762102 kernel: Zone ranges:
Sep 12 17:29:44.762110 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:29:44.762116 kernel: DMA32 empty
Sep 12 17:29:44.762131 kernel: Normal empty
Sep 12 17:29:44.762142 kernel: Device empty
Sep 12 17:29:44.762151 kernel: Movable zone start for each node
Sep 12 17:29:44.762158 kernel: Early memory node ranges
Sep 12 17:29:44.762166 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 12 17:29:44.762172 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 12 17:29:44.762178 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 12 17:29:44.762184 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 12 17:29:44.762190 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 12 17:29:44.762196 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 12 17:29:44.762205 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 12 17:29:44.762211 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 12 17:29:44.762217 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 17:29:44.762226 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:29:44.762232 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 17:29:44.762239 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 12 17:29:44.762245 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:29:44.762253 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:29:44.762260 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:29:44.762266 kernel: psci: Trusted OS migration not required
Sep 12 17:29:44.762272 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:29:44.762279 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:29:44.762285 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 17:29:44.762292 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 17:29:44.762298 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 17:29:44.762305 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:29:44.762312 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:29:44.762319 kernel: CPU features: detected: Spectre-v4
Sep 12 17:29:44.762325 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:29:44.762332 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:29:44.762338 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:29:44.762345 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:29:44.762351 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:29:44.762358 kernel: alternatives: applying boot alternatives
Sep 12 17:29:44.762365 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:29:44.762372 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:29:44.762379 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:29:44.762387 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:29:44.762393 kernel: Fallback order for Node 0: 0
Sep 12 17:29:44.762400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 17:29:44.762406 kernel: Policy zone: DMA
Sep 12 17:29:44.762412 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:29:44.762419 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 17:29:44.762425 kernel: software IO TLB: area num 4.
Sep 12 17:29:44.762432 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 17:29:44.762438 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 12 17:29:44.762445 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:29:44.762451 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:29:44.762458 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:29:44.762467 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:29:44.762473 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:29:44.762480 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:29:44.762487 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:29:44.762493 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:29:44.762500 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:29:44.762507 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:29:44.762513 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:29:44.762520 kernel: GICv3: 256 SPIs implemented
Sep 12 17:29:44.762526 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:29:44.762540 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:29:44.762549 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:29:44.762555 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 17:29:44.762562 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:29:44.762568 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:29:44.762575 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:29:44.762582 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:29:44.762588 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 17:29:44.762595 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 17:29:44.762601 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:29:44.762608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:44.762615 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:29:44.762621 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:29:44.762629 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:29:44.762636 kernel: arm-pv: using stolen time PV
Sep 12 17:29:44.762643 kernel: Console: colour dummy device 80x25
Sep 12 17:29:44.762649 kernel: ACPI: Core revision 20240827
Sep 12 17:29:44.762656 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:29:44.762663 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:29:44.762670 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:29:44.762676 kernel: landlock: Up and running.
Sep 12 17:29:44.762683 kernel: SELinux: Initializing.
Sep 12 17:29:44.762690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:29:44.762698 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:29:44.762704 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:29:44.762711 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:29:44.762718 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:29:44.762725 kernel: Remapping and enabling EFI services.
Sep 12 17:29:44.762731 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:29:44.762738 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:29:44.762745 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:29:44.762752 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 17:29:44.762764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:44.762771 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:29:44.762780 kernel: Detected PIPT I-cache on CPU2
Sep 12 17:29:44.762787 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 17:29:44.762794 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 17:29:44.762801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:44.762808 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 17:29:44.762815 kernel: Detected PIPT I-cache on CPU3
Sep 12 17:29:44.762823 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 17:29:44.762830 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 17:29:44.762838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:29:44.762845 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 17:29:44.762852 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:29:44.762859 kernel: SMP: Total of 4 processors activated.
Sep 12 17:29:44.762866 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:29:44.762873 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:29:44.762881 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:29:44.762889 kernel: CPU features: detected: Common not Private translations
Sep 12 17:29:44.762896 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:29:44.762903 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:29:44.762911 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:29:44.762918 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:29:44.762925 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:29:44.762932 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:29:44.762939 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:29:44.762947 kernel: alternatives: applying system-wide alternatives
Sep 12 17:29:44.762956 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 17:29:44.762964 kernel: Memory: 2422436K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 127516K reserved, 16384K cma-reserved)
Sep 12 17:29:44.762971 kernel: devtmpfs: initialized
Sep 12 17:29:44.762978 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:29:44.762985 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:29:44.762992 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:29:44.762999 kernel: 0 pages in range for non-PLT usage
Sep 12 17:29:44.763006 kernel: 508576 pages in range for PLT usage
Sep 12 17:29:44.763013 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:29:44.763021 kernel: SMBIOS 3.0.0 present.
Sep 12 17:29:44.763028 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 17:29:44.763035 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:29:44.763042 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:29:44.763049 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:29:44.763056 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:29:44.763063 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:29:44.763070 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:29:44.763077 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 12 17:29:44.763085 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:29:44.763092 kernel: cpuidle: using governor menu
Sep 12 17:29:44.763099 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:29:44.763106 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:29:44.763113 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:29:44.763124 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:29:44.763132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:29:44.763139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:29:44.763146 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:29:44.763155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:29:44.763162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:29:44.763170 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:29:44.763177 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:29:44.763184 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:29:44.763190 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:29:44.763198 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:29:44.763205 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:29:44.763212 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:29:44.763221 kernel: ACPI: Interpreter enabled
Sep 12 17:29:44.763228 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:29:44.763235 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:29:44.763242 kernel: ACPI: CPU0 has been hot-added
Sep 12 17:29:44.763249 kernel: ACPI: CPU1 has been hot-added
Sep 12 17:29:44.763256 kernel: ACPI: CPU2 has been hot-added
Sep 12 17:29:44.763263 kernel: ACPI: CPU3 has been hot-added
Sep 12 17:29:44.763271 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:29:44.763278 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 17:29:44.763286 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:29:44.763426 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:29:44.763497 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:29:44.763586 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:29:44.763649 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:29:44.763707 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:29:44.763716 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:29:44.763726 kernel: PCI host bridge to bus 0000:00
Sep 12 17:29:44.763790 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:29:44.763845 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:29:44.763902 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:29:44.763956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:29:44.764049 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:29:44.764127 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 17:29:44.764198 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 17:29:44.764260 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 17:29:44.764320 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:29:44.764380 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 17:29:44.764441 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 17:29:44.764501 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 17:29:44.764577 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:29:44.764637 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:29:44.764692 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:29:44.764701 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:29:44.764709 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:29:44.764716 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:29:44.764723 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:29:44.764731 kernel: iommu: Default domain type: Translated
Sep 12 17:29:44.764738 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:29:44.764747 kernel: efivars: Registered efivars operations
Sep 12 17:29:44.764754 kernel: vgaarb: loaded
Sep 12 17:29:44.764761 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:29:44.764768 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:29:44.764775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:29:44.764783 kernel: pnp: PnP ACPI init
Sep 12 17:29:44.764856 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:29:44.764867 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:29:44.764876 kernel: NET: Registered PF_INET protocol family
Sep 12 17:29:44.764884 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:29:44.764891 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:29:44.764898 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:29:44.764906 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:29:44.764913 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:29:44.764920 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:29:44.764928 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:29:44.764935 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:29:44.764944 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:29:44.764951 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:29:44.764958 kernel: kvm [1]: HYP mode not available
Sep 12 17:29:44.764965 kernel: Initialise system trusted keyrings
Sep 12 17:29:44.764972 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:29:44.764979 kernel: Key type asymmetric registered
Sep 12 17:29:44.764986 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:29:44.764993 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 17:29:44.765000 kernel: io scheduler mq-deadline registered
Sep 12 17:29:44.765008 kernel: io scheduler kyber registered
Sep 12 17:29:44.765016 kernel: io scheduler bfq registered
Sep 12 17:29:44.765023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:29:44.765031 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:29:44.765039 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:29:44.765102 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 17:29:44.765112 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:29:44.765119 kernel: thunder_xcv, ver 1.0
Sep 12 17:29:44.765132 kernel: thunder_bgx, ver 1.0
Sep 12 17:29:44.765142 kernel: nicpf, ver 1.0
Sep 12 17:29:44.765149 kernel: nicvf, ver 1.0
Sep 12 17:29:44.765222 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:29:44.765280 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:29:44 UTC (1757698184)
Sep 12 17:29:44.765289 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:29:44.765297 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 17:29:44.765304 kernel: watchdog: NMI not fully supported
Sep 12 17:29:44.765311 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:29:44.765320 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:29:44.765327 kernel: Segment Routing with IPv6
Sep 12 17:29:44.765334 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:29:44.765341 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:29:44.765348 kernel: Key type dns_resolver registered
Sep 12 17:29:44.765355 kernel: registered taskstats version 1
Sep 12 17:29:44.765362 kernel: Loading compiled-in X.509 certificates
Sep 12 17:29:44.765369 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7'
Sep 12 17:29:44.765376 kernel: Demotion targets for Node 0: null
Sep 12 17:29:44.765384 kernel: Key type .fscrypt registered
Sep 12 17:29:44.765391 kernel: Key type fscrypt-provisioning registered
Sep 12 17:29:44.765398 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:29:44.765405 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:29:44.765413 kernel: ima: No architecture policies found
Sep 12 17:29:44.765420 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:29:44.765427 kernel: clk: Disabling unused clocks
Sep 12 17:29:44.765434 kernel: PM: genpd: Disabling unused power domains
Sep 12 17:29:44.765441 kernel: Warning: unable to open an initial console.
Sep 12 17:29:44.765450 kernel: Freeing unused kernel memory: 38912K
Sep 12 17:29:44.765457 kernel: Run /init as init process
Sep 12 17:29:44.765464 kernel: with arguments:
Sep 12 17:29:44.765471 kernel: /init
Sep 12 17:29:44.765478 kernel: with environment:
Sep 12 17:29:44.765484 kernel: HOME=/
Sep 12 17:29:44.765491 kernel: TERM=linux
Sep 12 17:29:44.765498 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:29:44.765506 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:29:44.765518 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:29:44.765526 systemd[1]: Detected virtualization kvm.
Sep 12 17:29:44.765546 systemd[1]: Detected architecture arm64.
Sep 12 17:29:44.765553 systemd[1]: Running in initrd.
Sep 12 17:29:44.765560 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:29:44.765568 systemd[1]: Hostname set to .
Sep 12 17:29:44.765575 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:29:44.765585 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:29:44.765593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:29:44.765601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:29:44.765609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:29:44.765616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:29:44.765624 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:29:44.765633 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:29:44.765643 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:29:44.765651 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:29:44.765658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:29:44.765666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:29:44.765674 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:29:44.765681 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:29:44.765689 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:29:44.765696 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:29:44.765705 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:29:44.765713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:29:44.765720 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:29:44.765728 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:29:44.765735 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:29:44.765743 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:29:44.765751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:29:44.765759 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:29:44.765766 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:29:44.765775 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:29:44.765783 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:29:44.765791 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:29:44.765799 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:29:44.765807 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:29:44.765814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:29:44.765822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:29:44.765830 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:29:44.765839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:29:44.765847 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:29:44.765870 systemd-journald[243]: Collecting audit messages is disabled.
Sep 12 17:29:44.765890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:29:44.765899 systemd-journald[243]: Journal started
Sep 12 17:29:44.765917 systemd-journald[243]: Runtime Journal (/run/log/journal/54c35a41a93a41e7b23946e98805eb7a) is 6M, max 48.5M, 42.4M free.
Sep 12 17:29:44.771633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:29:44.758934 systemd-modules-load[244]: Inserted module 'overlay'
Sep 12 17:29:44.774133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:44.775653 systemd-modules-load[244]: Inserted module 'br_netfilter'
Sep 12 17:29:44.777212 kernel: Bridge firewalling registered
Sep 12 17:29:44.777229 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:29:44.778299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:29:44.779420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:29:44.783951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:29:44.785468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:29:44.788663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:29:44.800368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:29:44.808852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:29:44.808889 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:29:44.810728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:29:44.812773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:29:44.816694 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:29:44.818749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:29:44.820572 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:29:44.850686 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:29:44.864802 systemd-resolved[287]: Positive Trust Anchors:
Sep 12 17:29:44.864820 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:29:44.864852 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:29:44.869828 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 12 17:29:44.871991 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:29:44.872948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:29:44.925552 kernel: SCSI subsystem initialized
Sep 12 17:29:44.929548 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:29:44.937555 kernel: iscsi: registered transport (tcp)
Sep 12 17:29:44.950549 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:29:44.950580 kernel: QLogic iSCSI HBA Driver
Sep 12 17:29:44.967316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:29:44.994268 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:29:44.996393 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:29:45.042517 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:29:45.044592 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:29:45.110578 kernel: raid6: neonx8 gen() 15635 MB/s Sep 12 17:29:45.127553 kernel: raid6: neonx4 gen() 15796 MB/s Sep 12 17:29:45.144557 kernel: raid6: neonx2 gen() 13186 MB/s Sep 12 17:29:45.161554 kernel: raid6: neonx1 gen() 10447 MB/s Sep 12 17:29:45.178550 kernel: raid6: int64x8 gen() 6886 MB/s Sep 12 17:29:45.195556 kernel: raid6: int64x4 gen() 7333 MB/s Sep 12 17:29:45.212552 kernel: raid6: int64x2 gen() 6099 MB/s Sep 12 17:29:45.229708 kernel: raid6: int64x1 gen() 5049 MB/s Sep 12 17:29:45.229721 kernel: raid6: using algorithm neonx4 gen() 15796 MB/s Sep 12 17:29:45.247735 kernel: raid6: .... xor() 12343 MB/s, rmw enabled Sep 12 17:29:45.247748 kernel: raid6: using neon recovery algorithm Sep 12 17:29:45.254057 kernel: xor: measuring software checksum speed Sep 12 17:29:45.254086 kernel: 8regs : 21647 MB/sec Sep 12 17:29:45.254099 kernel: 32regs : 21681 MB/sec Sep 12 17:29:45.254812 kernel: arm64_neon : 27079 MB/sec Sep 12 17:29:45.254825 kernel: xor: using function: arm64_neon (27079 MB/sec) Sep 12 17:29:45.307557 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:29:45.313970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:29:45.316334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:29:45.348157 systemd-udevd[499]: Using default interface naming scheme 'v255'. Sep 12 17:29:45.352255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:29:45.353988 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:29:45.378885 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 12 17:29:45.402778 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:29:45.404850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:29:45.460848 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 12 17:29:45.463033 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:29:45.518169 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 17:29:45.524201 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:29:45.527862 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:29:45.527912 kernel: GPT:9289727 != 19775487 Sep 12 17:29:45.527923 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:29:45.529551 kernel: GPT:9289727 != 19775487 Sep 12 17:29:45.529584 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:29:45.531003 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:29:45.540419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:29:45.540556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:29:45.542431 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:29:45.546250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:29:45.565662 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:29:45.574158 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:29:45.576338 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:29:45.577582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:29:45.595168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:29:45.601081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:29:45.602094 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 12 17:29:45.603894 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:29:45.606428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:29:45.608230 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:29:45.610681 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:29:45.612306 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:29:45.634562 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:29:45.634610 disk-uuid[591]: Primary Header is updated. Sep 12 17:29:45.634610 disk-uuid[591]: Secondary Entries is updated. Sep 12 17:29:45.634610 disk-uuid[591]: Secondary Header is updated. Sep 12 17:29:45.637718 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:29:46.647564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:29:46.647968 disk-uuid[597]: The operation has completed successfully. Sep 12 17:29:46.669461 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:29:46.669564 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:29:46.694856 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:29:46.723435 sh[611]: Success Sep 12 17:29:46.735577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:29:46.735680 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:29:46.735702 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:29:46.744562 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 12 17:29:46.770375 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:29:46.772613 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 12 17:29:46.789439 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:29:46.795659 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (624) Sep 12 17:29:46.797789 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194 Sep 12 17:29:46.797825 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:29:46.801552 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:29:46.801593 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:29:46.802281 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:29:46.803410 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:29:46.804415 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:29:46.805154 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:29:46.807897 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:29:46.844586 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Sep 12 17:29:46.844637 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:29:46.846690 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:29:46.849545 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:29:46.849594 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:29:46.854550 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:29:46.855386 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 12 17:29:46.857571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:29:46.927747 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:29:46.932214 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:29:46.968618 ignition[704]: Ignition 2.21.0 Sep 12 17:29:46.968632 ignition[704]: Stage: fetch-offline Sep 12 17:29:46.968663 ignition[704]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:29:46.968670 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:29:46.968837 ignition[704]: parsed url from cmdline: "" Sep 12 17:29:46.968840 ignition[704]: no config URL provided Sep 12 17:29:46.968844 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:29:46.968851 ignition[704]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:29:46.968872 ignition[704]: op(1): [started] loading QEMU firmware config module Sep 12 17:29:46.968876 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:29:46.976968 ignition[704]: op(1): [finished] loading QEMU firmware config module Sep 12 17:29:46.982034 systemd-networkd[804]: lo: Link UP Sep 12 17:29:46.982047 systemd-networkd[804]: lo: Gained carrier Sep 12 17:29:46.982810 systemd-networkd[804]: Enumeration completed Sep 12 17:29:46.982952 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:29:46.983261 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:29:46.983265 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 17:29:46.983688 systemd-networkd[804]: eth0: Link UP Sep 12 17:29:46.984025 systemd-networkd[804]: eth0: Gained carrier Sep 12 17:29:46.984034 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:29:46.984721 systemd[1]: Reached target network.target - Network. Sep 12 17:29:47.007582 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:29:47.030342 ignition[704]: parsing config with SHA512: 880f8a9868f0c624193008044ac010d8f7a1f0485841b94ec030dc2e77600c563a991f1c02c8c454056458acd50f82a053bec4ffa42972919ab35c0b74041f54 Sep 12 17:29:47.034337 unknown[704]: fetched base config from "system" Sep 12 17:29:47.034349 unknown[704]: fetched user config from "qemu" Sep 12 17:29:47.034740 ignition[704]: fetch-offline: fetch-offline passed Sep 12 17:29:47.035488 systemd-resolved[287]: Detected conflict on linux IN A 10.0.0.133 Sep 12 17:29:47.034801 ignition[704]: Ignition finished successfully Sep 12 17:29:47.035496 systemd-resolved[287]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Sep 12 17:29:47.037136 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:29:47.038553 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:29:47.039320 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 17:29:47.070794 ignition[813]: Ignition 2.21.0 Sep 12 17:29:47.070812 ignition[813]: Stage: kargs Sep 12 17:29:47.070960 ignition[813]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:29:47.070969 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:29:47.073513 ignition[813]: kargs: kargs passed Sep 12 17:29:47.073587 ignition[813]: Ignition finished successfully Sep 12 17:29:47.076929 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:29:47.079682 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:29:47.103820 ignition[821]: Ignition 2.21.0 Sep 12 17:29:47.103832 ignition[821]: Stage: disks Sep 12 17:29:47.104005 ignition[821]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:29:47.104015 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:29:47.104841 ignition[821]: disks: disks passed Sep 12 17:29:47.104889 ignition[821]: Ignition finished successfully Sep 12 17:29:47.110579 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:29:47.111520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:29:47.112833 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:29:47.114521 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:29:47.116106 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:29:47.117434 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:29:47.119622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:29:47.148437 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:29:47.152381 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:29:47.154930 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 12 17:29:47.232565 kernel: EXT4-fs (vda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none. Sep 12 17:29:47.233282 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:29:47.234438 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:29:47.236601 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:29:47.238122 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:29:47.238948 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:29:47.238993 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:29:47.239023 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:29:47.250520 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:29:47.252465 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:29:47.257583 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 12 17:29:47.257611 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:29:47.259829 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:29:47.262693 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:29:47.262732 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:29:47.264513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:29:47.289147 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:29:47.293604 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:29:47.297710 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:29:47.301581 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:29:47.371354 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:29:47.373356 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:29:47.375685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:29:47.395883 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:29:47.405576 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:29:47.414124 ignition[953]: INFO : Ignition 2.21.0 Sep 12 17:29:47.414124 ignition[953]: INFO : Stage: mount Sep 12 17:29:47.415459 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:29:47.415459 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:29:47.417698 ignition[953]: INFO : mount: mount passed Sep 12 17:29:47.417698 ignition[953]: INFO : Ignition finished successfully Sep 12 17:29:47.418496 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:29:47.420627 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:29:47.795178 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:29:47.798710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 17:29:47.814470 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Sep 12 17:29:47.814510 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:29:47.814538 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:29:47.817996 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:29:47.818026 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:29:47.819470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:29:47.846959 ignition[982]: INFO : Ignition 2.21.0 Sep 12 17:29:47.846959 ignition[982]: INFO : Stage: files Sep 12 17:29:47.848310 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:29:47.848310 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:29:47.850073 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:29:47.850073 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:29:47.850073 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:29:47.853219 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:29:47.853219 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:29:47.853219 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:29:47.853219 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 17:29:47.852247 unknown[982]: wrote ssh authorized keys file for user: core Sep 12 17:29:47.858614 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 12 17:29:47.951173 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:29:48.356358 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:29:48.358107 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:29:48.370930 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:29:48.370930 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:29:48.370930 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:29:48.370930 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:29:48.370930 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:29:48.370930 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 17:29:48.414690 systemd-networkd[804]: eth0: Gained IPv6LL Sep 12 17:29:48.783302 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:29:49.029366 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:29:49.029366 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 17:29:49.033845 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:29:49.055827 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:29:49.055827 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:29:49.055827 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:29:49.055827 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:29:49.055827 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:29:49.055827 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:29:49.055827 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:29:49.055827 ignition[982]: INFO : files: files passed Sep 12 17:29:49.055827 ignition[982]: INFO : Ignition finished successfully Sep 12 17:29:49.055791 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:29:49.060691 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:29:49.064710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:29:49.086108 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:29:49.088305 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:29:49.086220 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:29:49.092035 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:29:49.093563 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:29:49.096348 initrd-setup-root-after-ignition[1017]: grep: Sep 12 17:29:49.093994 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:29:49.100815 initrd-setup-root-after-ignition[1017]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:29:49.098207 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:29:49.100290 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:29:49.160522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:29:49.160647 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:29:49.162939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:29:49.164300 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:29:49.165933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:29:49.166850 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:29:49.197447 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:29:49.200114 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:29:49.242031 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:29:49.245388 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:29:49.250498 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 12 17:29:49.252707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:29:49.252848 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:29:49.256611 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:29:49.261108 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:29:49.264255 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:29:49.265436 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:29:49.267500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:29:49.269316 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:29:49.271963 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:29:49.276328 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:29:49.278567 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:29:49.280718 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:29:49.282065 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:29:49.283614 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:29:49.283786 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:29:49.286114 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:29:49.288173 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:29:49.289822 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:29:49.289928 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:29:49.292075 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:29:49.292311 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 12 17:29:49.296394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:29:49.296573 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:29:49.299165 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:29:49.300367 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:29:49.303596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:29:49.305374 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:29:49.307213 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:29:49.309395 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:29:49.309518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:29:49.311323 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:29:49.311433 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:29:49.313273 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:29:49.313429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:29:49.315832 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:29:49.316203 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:29:49.319050 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:29:49.320767 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:29:49.320943 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:29:49.324161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:29:49.324961 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:29:49.325143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 12 17:29:49.326667 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:29:49.326815 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:29:49.334904 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:29:49.335078 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:29:49.344484 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:29:49.351386 ignition[1037]: INFO : Ignition 2.21.0 Sep 12 17:29:49.351386 ignition[1037]: INFO : Stage: umount Sep 12 17:29:49.353611 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:29:49.353611 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:29:49.353611 ignition[1037]: INFO : umount: umount passed Sep 12 17:29:49.353611 ignition[1037]: INFO : Ignition finished successfully Sep 12 17:29:49.351933 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:29:49.352060 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:29:49.354893 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:29:49.355019 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:29:49.357206 systemd[1]: Stopped target network.target - Network. Sep 12 17:29:49.358762 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:29:49.358859 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:29:49.360469 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:29:49.360526 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:29:49.362918 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:29:49.363091 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:29:49.364399 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 12 17:29:49.364492 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:29:49.366034 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:29:49.366108 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:29:49.367749 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:29:49.369195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:29:49.378561 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:29:49.378753 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:29:49.382930 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:29:49.383168 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:29:49.383286 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:29:49.386695 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:29:49.387363 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:29:49.389110 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:29:49.389158 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:29:49.393751 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:29:49.394447 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:29:49.394501 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:29:49.398951 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:29:49.399014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:29:49.401498 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:29:49.401558 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:29:49.403277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:29:49.403321 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:29:49.406601 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:29:49.411315 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:29:49.411386 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:29:49.418606 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:29:49.418947 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:29:49.421292 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:29:49.421335 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:29:49.423127 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:29:49.423180 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:29:49.424676 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:29:49.424726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:29:49.427251 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:29:49.427343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:29:49.429505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:29:49.429755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:29:49.440247 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:29:49.441220 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:29:49.441290 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:29:49.444898 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:29:49.444975 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:29:49.447721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:29:49.447771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:49.451687 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:29:49.451755 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:29:49.451787 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:29:49.452194 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:29:49.453710 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:29:49.462914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:29:49.464136 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:29:49.466653 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:29:49.468845 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:29:49.505815 systemd[1]: Switching root.
Sep 12 17:29:49.536224 systemd-journald[243]: Journal stopped
Sep 12 17:29:50.431749 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:29:50.431803 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:29:50.431815 kernel: SELinux: policy capability open_perms=1
Sep 12 17:29:50.431828 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:29:50.431845 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:29:50.431856 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:29:50.431868 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:29:50.431881 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:29:50.431891 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:29:50.431899 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:29:50.431908 systemd[1]: Successfully loaded SELinux policy in 70.181ms.
Sep 12 17:29:50.431923 kernel: audit: type=1403 audit(1757698189.729:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:29:50.431933 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.962ms.
Sep 12 17:29:50.431944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:29:50.431957 systemd[1]: Detected virtualization kvm.
Sep 12 17:29:50.431966 systemd[1]: Detected architecture arm64.
Sep 12 17:29:50.431976 systemd[1]: Detected first boot.
Sep 12 17:29:50.431986 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:29:50.431996 zram_generator::config[1082]: No configuration found.
Sep 12 17:29:50.432006 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:29:50.432015 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:29:50.432030 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:29:50.432041 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:29:50.432052 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:29:50.432062 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:29:50.432072 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:29:50.432081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:29:50.432100 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:29:50.432113 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:29:50.432123 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:29:50.432133 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:29:50.432145 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:29:50.432155 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:29:50.432165 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:29:50.432175 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:29:50.432186 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:29:50.432204 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:29:50.432214 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:29:50.432224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:29:50.432235 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:29:50.432248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:29:50.432258 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:29:50.432268 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:29:50.432278 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:29:50.432288 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:29:50.432298 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:29:50.432308 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:29:50.432318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:29:50.432330 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:29:50.432341 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:29:50.432351 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:29:50.432362 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:29:50.432373 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:29:50.432383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:29:50.432393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:29:50.432404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:29:50.432415 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:29:50.432427 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:29:50.432437 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:29:50.432447 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:29:50.432458 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:29:50.432469 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:29:50.432479 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:29:50.432489 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:29:50.432500 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:29:50.432509 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:29:50.432521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:50.432544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:29:50.432555 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:29:50.432565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:50.432575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:29:50.432585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:50.432595 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:29:50.432605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:50.432616 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:29:50.432626 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:29:50.432636 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:29:50.432645 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:29:50.432655 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:29:50.432665 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:29:50.432675 kernel: fuse: init (API version 7.41)
Sep 12 17:29:50.432685 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:29:50.432696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:29:50.432707 kernel: ACPI: bus type drm_connector registered
Sep 12 17:29:50.432716 kernel: loop: module loaded
Sep 12 17:29:50.432740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:29:50.432751 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:29:50.432762 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:29:50.432772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:29:50.432783 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:29:50.432794 systemd[1]: Stopped verity-setup.service.
Sep 12 17:29:50.432828 systemd-journald[1153]: Collecting audit messages is disabled.
Sep 12 17:29:50.432854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:29:50.432866 systemd-journald[1153]: Journal started
Sep 12 17:29:50.432888 systemd-journald[1153]: Runtime Journal (/run/log/journal/54c35a41a93a41e7b23946e98805eb7a) is 6M, max 48.5M, 42.4M free.
Sep 12 17:29:50.183993 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:29:50.204865 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:29:50.205287 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:29:50.437759 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:29:50.439247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:29:50.440399 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:29:50.441343 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:29:50.442506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:29:50.443497 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:29:50.446560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:29:50.447741 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:29:50.448973 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:29:50.449164 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:29:50.450365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:50.450582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:50.451748 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:29:50.451910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:29:50.453060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:50.453237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:50.454609 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:29:50.454762 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:29:50.455879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:50.456050 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:50.457411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:29:50.458718 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:29:50.459943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:29:50.461563 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:29:50.473589 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:29:50.475776 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:29:50.477699 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:29:50.478591 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:29:50.478623 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:29:50.480268 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:29:50.491409 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:29:50.492537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:50.493719 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:29:50.495594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:29:50.496595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:29:50.499674 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:29:50.500669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:29:50.501712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:29:50.503686 systemd-journald[1153]: Time spent on flushing to /var/log/journal/54c35a41a93a41e7b23946e98805eb7a is 24.548ms for 886 entries.
Sep 12 17:29:50.503686 systemd-journald[1153]: System Journal (/var/log/journal/54c35a41a93a41e7b23946e98805eb7a) is 8M, max 195.6M, 187.6M free.
Sep 12 17:29:50.547580 systemd-journald[1153]: Received client request to flush runtime journal.
Sep 12 17:29:50.547640 kernel: loop0: detected capacity change from 0 to 100608
Sep 12 17:29:50.503820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:29:50.507326 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:29:50.514353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:29:50.518517 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:29:50.521097 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:29:50.540718 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:29:50.542254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:29:50.545830 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:29:50.548213 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:29:50.550714 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:29:50.565691 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:29:50.580950 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:29:50.583683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:29:50.587238 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:29:50.592693 kernel: loop1: detected capacity change from 0 to 207008
Sep 12 17:29:50.614565 kernel: loop2: detected capacity change from 0 to 119320
Sep 12 17:29:50.617226 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Sep 12 17:29:50.617245 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Sep 12 17:29:50.620944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:29:50.646565 kernel: loop3: detected capacity change from 0 to 100608
Sep 12 17:29:50.652576 kernel: loop4: detected capacity change from 0 to 207008
Sep 12 17:29:50.658550 kernel: loop5: detected capacity change from 0 to 119320
Sep 12 17:29:50.664380 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:29:50.664793 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 12 17:29:50.669459 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:29:50.669475 systemd[1]: Reloading...
Sep 12 17:29:50.729573 zram_generator::config[1250]: No configuration found.
Sep 12 17:29:50.838741 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:29:50.880171 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:29:50.880486 systemd[1]: Reloading finished in 210 ms.
Sep 12 17:29:50.910956 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:29:50.912794 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:29:50.928868 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:29:50.931374 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:29:50.942608 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:29:50.942623 systemd[1]: Reloading...
Sep 12 17:29:50.946753 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:29:50.947112 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:29:50.947447 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:29:50.947744 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:29:50.948469 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:29:50.948799 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 12 17:29:50.948918 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 12 17:29:50.951729 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:29:50.951838 systemd-tmpfiles[1285]: Skipping /boot
Sep 12 17:29:50.960360 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:29:50.960485 systemd-tmpfiles[1285]: Skipping /boot
Sep 12 17:29:50.987575 zram_generator::config[1312]: No configuration found.
Sep 12 17:29:51.123617 systemd[1]: Reloading finished in 180 ms.
Sep 12 17:29:51.147778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:29:51.166727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:29:51.175105 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:29:51.178123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:29:51.206472 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:29:51.214282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:29:51.224493 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:29:51.226959 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:29:51.233194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:51.238402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:51.241713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:51.246290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:51.247675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:51.247826 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:29:51.250297 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:29:51.252537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:51.254581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:51.256244 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:51.256426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:51.266015 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Sep 12 17:29:51.268899 augenrules[1379]: No rules
Sep 12 17:29:51.270808 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:29:51.273647 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:29:51.273867 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:29:51.275660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:51.275852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:51.280801 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:29:51.288400 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:29:51.289648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:29:51.290975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:29:51.293818 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:29:51.300908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:29:51.327830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:29:51.329178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:29:51.329319 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:29:51.333662 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:29:51.335705 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:29:51.338104 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:29:51.341586 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:29:51.343006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:29:51.345642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:29:51.347440 augenrules[1388]: /sbin/augenrules: No change
Sep 12 17:29:51.347433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:29:51.349976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:29:51.352254 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:29:51.352437 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:29:51.365825 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:29:51.368216 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:29:51.368424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:29:51.380569 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:29:51.385704 augenrules[1449]: No rules
Sep 12 17:29:51.387593 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:29:51.387786 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:29:51.391703 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 17:29:51.405481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:29:51.410860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:29:51.414980 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:29:51.415804 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:29:51.415873 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:29:51.417803 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:29:51.418720 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:29:51.439348 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:29:51.465318 systemd-resolved[1352]: Positive Trust Anchors:
Sep 12 17:29:51.465646 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:29:51.465726 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:29:51.473684 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Sep 12 17:29:51.474056 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:29:51.475393 systemd-networkd[1461]: lo: Link UP
Sep 12 17:29:51.475406 systemd-networkd[1461]: lo: Gained carrier
Sep 12 17:29:51.475779 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:29:51.476225 systemd-networkd[1461]: Enumeration completed
Sep 12 17:29:51.476639 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:29:51.476647 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:29:51.477071 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:29:51.477267 systemd-networkd[1461]: eth0: Link UP
Sep 12 17:29:51.477374 systemd-networkd[1461]: eth0: Gained carrier
Sep 12 17:29:51.477389 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:29:51.478456 systemd[1]: Reached target network.target - Network.
Sep 12 17:29:51.479612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:29:51.480888 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:29:51.482336 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:29:51.483640 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:29:51.484829 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:29:51.485748 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:29:51.485779 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:29:51.486845 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:29:51.488246 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:29:51.489538 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:29:51.490684 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:29:51.492338 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:29:51.492604 systemd-networkd[1461]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:29:51.494955 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:29:51.495183 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection.
Sep 12 17:29:51.496246 systemd-timesyncd[1462]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:29:51.496372 systemd-timesyncd[1462]: Initial clock synchronization to Fri 2025-09-12 17:29:51.365482 UTC.
Sep 12 17:29:51.497691 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:29:51.498992 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:29:51.500122 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:29:51.504395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:29:51.505904 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:29:51.510714 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:29:51.514723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:29:51.520644 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:29:51.522265 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:29:51.523100 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:29:51.524079 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:29:51.524137 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:29:51.526632 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:29:51.529751 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:29:51.534776 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:29:51.537752 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:29:51.541727 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:29:51.544613 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:29:51.549741 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:29:51.555109 jq[1488]: false
Sep 12 17:29:51.552827 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:29:51.555553 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:29:51.557747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:29:51.562680 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:29:51.564479 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:29:51.565066 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:29:51.567803 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:29:51.572647 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:29:51.586251 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:29:51.589923 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:29:51.590162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:29:51.590889 extend-filesystems[1489]: Found /dev/vda6
Sep 12 17:29:51.591388 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:29:51.591615 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:29:51.593929 jq[1503]: true
Sep 12 17:29:51.593923 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:29:51.594682 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:29:51.606387 extend-filesystems[1489]: Found /dev/vda9
Sep 12 17:29:51.611415 jq[1512]: true
Sep 12 17:29:51.613900 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:29:51.618631 update_engine[1499]: I20250912 17:29:51.618368 1499 main.cc:92] Flatcar Update Engine starting
Sep 12 17:29:51.619400 tar[1510]: linux-arm64/LICENSE
Sep 12 17:29:51.620715 tar[1510]: linux-arm64/helm
Sep 12 17:29:51.623182 extend-filesystems[1489]: Checking size of /dev/vda9
Sep 12 17:29:51.623227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:29:51.639759 dbus-daemon[1486]: [system] SELinux support is enabled
Sep 12 17:29:51.639962 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:29:51.645134 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:29:51.645181 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:29:51.646776 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:29:51.646802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:29:51.647133 update_engine[1499]: I20250912 17:29:51.647071 1499 update_check_scheduler.cc:74] Next update check in 11m23s
Sep 12 17:29:51.648787 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:29:51.651024 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:29:51.654105 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:29:51.665927 extend-filesystems[1489]: Resized partition /dev/vda9
Sep 12 17:29:51.668684 extend-filesystems[1540]: resize2fs 1.47.2 (1-Jan-2025)
Sep 12 17:29:51.679233 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:29:51.722581 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 17:29:51.745896 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:29:51.745896 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:29:51.745896 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 17:29:51.756467 extend-filesystems[1489]: Resized filesystem in /dev/vda9
Sep 12 17:29:51.748988 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:29:51.759373 bash[1550]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:29:51.752905 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:29:51.753113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
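The extend-filesystems/resize2fs entries above report an online ext4 grow from 553472 to 1864699 blocks at a 4 KiB block size. A small plain-Python sketch (not from the log; the helper name is ours) converting those block counts into byte sizes:

```python
# Convert ext4 block counts, as reported by resize2fs/the kernel above,
# into GiB. The log states the filesystem uses 4k blocks.
BLOCK_SIZE = 4096  # bytes per ext4 block


def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Return the size in GiB for a given ext4 block count."""
    return blocks * block_size / 2**30


old_blocks, new_blocks = 553472, 1864699
print(f"before resize: {blocks_to_gib(old_blocks):.2f} GiB")
print(f"after resize:  {blocks_to_gib(new_blocks):.2f} GiB")
```

This matches the typical first-boot behaviour of a cloud image: the root partition is grown to fill the disk, then the mounted filesystem is resized online.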
Sep 12 17:29:51.760555 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 12 17:29:51.760779 systemd-logind[1496]: New seat seat0.
Sep 12 17:29:51.778055 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:29:51.780612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:29:51.786414 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:29:51.795663 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:29:51.825837 containerd[1514]: time="2025-09-12T17:29:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 17:29:51.826619 containerd[1514]: time="2025-09-12T17:29:51.826577080Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 17:29:51.838036 containerd[1514]: time="2025-09-12T17:29:51.837925520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.2µs"
Sep 12 17:29:51.838036 containerd[1514]: time="2025-09-12T17:29:51.838017800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 17:29:51.838036 containerd[1514]: time="2025-09-12T17:29:51.838043960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 17:29:51.838244 containerd[1514]: time="2025-09-12T17:29:51.838221680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 17:29:51.838270 containerd[1514]: time="2025-09-12T17:29:51.838248440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 17:29:51.838287 containerd[1514]: time="2025-09-12T17:29:51.838274440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838348 containerd[1514]: time="2025-09-12T17:29:51.838329600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838375 containerd[1514]: time="2025-09-12T17:29:51.838348240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838730 containerd[1514]: time="2025-09-12T17:29:51.838701880Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838730 containerd[1514]: time="2025-09-12T17:29:51.838727640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838789 containerd[1514]: time="2025-09-12T17:29:51.838741320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838789 containerd[1514]: time="2025-09-12T17:29:51.838749320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 17:29:51.838845 containerd[1514]: time="2025-09-12T17:29:51.838828160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 17:29:51.839112 containerd[1514]: time="2025-09-12T17:29:51.839078640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:29:51.839140 containerd[1514]: time="2025-09-12T17:29:51.839128920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:29:51.839159 containerd[1514]: time="2025-09-12T17:29:51.839139280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 17:29:51.839190 containerd[1514]: time="2025-09-12T17:29:51.839176600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 17:29:51.840373 containerd[1514]: time="2025-09-12T17:29:51.840338040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 17:29:51.840671 containerd[1514]: time="2025-09-12T17:29:51.840636960Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:29:51.854287 containerd[1514]: time="2025-09-12T17:29:51.854224400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 17:29:51.854387 containerd[1514]: time="2025-09-12T17:29:51.854323960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 17:29:51.854387 containerd[1514]: time="2025-09-12T17:29:51.854340440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 17:29:51.854387 containerd[1514]: time="2025-09-12T17:29:51.854352920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 17:29:51.854465 containerd[1514]: time="2025-09-12T17:29:51.854414360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 17:29:51.854465 containerd[1514]: time="2025-09-12T17:29:51.854428840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 17:29:51.854465 containerd[1514]: time="2025-09-12T17:29:51.854440920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 17:29:51.854465 containerd[1514]: time="2025-09-12T17:29:51.854455120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 17:29:51.854543 containerd[1514]: time="2025-09-12T17:29:51.854467000Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 17:29:51.854543 containerd[1514]: time="2025-09-12T17:29:51.854477960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 17:29:51.854543 containerd[1514]: time="2025-09-12T17:29:51.854488320Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 17:29:51.854543 containerd[1514]: time="2025-09-12T17:29:51.854506520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 17:29:51.854707 containerd[1514]: time="2025-09-12T17:29:51.854684080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 17:29:51.854741 containerd[1514]: time="2025-09-12T17:29:51.854713200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 17:29:51.854741 containerd[1514]: time="2025-09-12T17:29:51.854732680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 17:29:51.854782 containerd[1514]: time="2025-09-12T17:29:51.854749760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 17:29:51.854782 containerd[1514]: time="2025-09-12T17:29:51.854769120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 17:29:51.854814 containerd[1514]: time="2025-09-12T17:29:51.854780520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 17:29:51.854814 containerd[1514]: time="2025-09-12T17:29:51.854792560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 17:29:51.854814 containerd[1514]: time="2025-09-12T17:29:51.854804400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 17:29:51.854868 containerd[1514]: time="2025-09-12T17:29:51.854817600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 17:29:51.854868 containerd[1514]: time="2025-09-12T17:29:51.854829800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 17:29:51.854868 containerd[1514]: time="2025-09-12T17:29:51.854841520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 17:29:51.855057 containerd[1514]: time="2025-09-12T17:29:51.855038240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 17:29:51.855103 containerd[1514]: time="2025-09-12T17:29:51.855061720Z" level=info msg="Start snapshots syncer"
Sep 12 17:29:51.855138 containerd[1514]: time="2025-09-12T17:29:51.855124320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 17:29:51.855869 containerd[1514]: time="2025-09-12T17:29:51.855815880Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 17:29:51.855986 containerd[1514]: time="2025-09-12T17:29:51.855913680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 17:29:51.856054 containerd[1514]: time="2025-09-12T17:29:51.856032640Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 17:29:51.856331 containerd[1514]: time="2025-09-12T17:29:51.856306120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 17:29:51.856358 containerd[1514]: time="2025-09-12T17:29:51.856341200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 17:29:51.856358 containerd[1514]: time="2025-09-12T17:29:51.856353920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 17:29:51.856399 containerd[1514]: time="2025-09-12T17:29:51.856365480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 17:29:51.856399 containerd[1514]: time="2025-09-12T17:29:51.856377480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 17:29:51.856399 containerd[1514]: time="2025-09-12T17:29:51.856389280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 17:29:51.856446 containerd[1514]: time="2025-09-12T17:29:51.856399600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 17:29:51.856446 containerd[1514]: time="2025-09-12T17:29:51.856426000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 17:29:51.856446 containerd[1514]: time="2025-09-12T17:29:51.856438520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 17:29:51.856496 containerd[1514]: time="2025-09-12T17:29:51.856448480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 17:29:51.856496 containerd[1514]: time="2025-09-12T17:29:51.856491800Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 17:29:51.856539 containerd[1514]: time="2025-09-12T17:29:51.856506040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 17:29:51.856539 containerd[1514]: time="2025-09-12T17:29:51.856515320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 17:29:51.856574 containerd[1514]: time="2025-09-12T17:29:51.856524480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 17:29:51.856574 containerd[1514]: time="2025-09-12T17:29:51.856550160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 17:29:51.856574 containerd[1514]: time="2025-09-12T17:29:51.856566160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 17:29:51.856629 containerd[1514]: time="2025-09-12T17:29:51.856578840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 17:29:51.856671 containerd[1514]: time="2025-09-12T17:29:51.856656080Z" level=info msg="runtime interface created"
Sep 12 17:29:51.856671 containerd[1514]: time="2025-09-12T17:29:51.856666320Z" level=info msg="created NRI interface"
Sep 12 17:29:51.856708 containerd[1514]: time="2025-09-12T17:29:51.856675440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 17:29:51.856708 containerd[1514]: time="2025-09-12T17:29:51.856689240Z" level=info msg="Connect containerd service"
Sep 12 17:29:51.856744 containerd[1514]: time="2025-09-12T17:29:51.856715520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:29:51.857712 containerd[1514]: time="2025-09-12T17:29:51.857678600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:29:51.932940 containerd[1514]: time="2025-09-12T17:29:51.932814920Z" level=info msg="Start subscribing containerd event"
Sep 12 17:29:51.932940 containerd[1514]: time="2025-09-12T17:29:51.932895520Z" level=info msg="Start recovering state"
Sep 12 17:29:51.933044 containerd[1514]: time="2025-09-12T17:29:51.932978440Z" level=info msg="Start event monitor"
Sep 12 17:29:51.933044 containerd[1514]: time="2025-09-12T17:29:51.932990920Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:29:51.933044 containerd[1514]: time="2025-09-12T17:29:51.933001040Z" level=info msg="Start streaming server"
Sep 12 17:29:51.933126 containerd[1514]: time="2025-09-12T17:29:51.933072680Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 17:29:51.933126 containerd[1514]: time="2025-09-12T17:29:51.933089760Z" level=info msg="runtime interface starting up..."
Sep 12 17:29:51.933126 containerd[1514]: time="2025-09-12T17:29:51.933097760Z" level=info msg="starting plugins..."
Sep 12 17:29:51.933126 containerd[1514]: time="2025-09-12T17:29:51.933098160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:29:51.933194 containerd[1514]: time="2025-09-12T17:29:51.933116200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 17:29:51.933194 containerd[1514]: time="2025-09-12T17:29:51.933159160Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:29:51.933303 containerd[1514]: time="2025-09-12T17:29:51.933284920Z" level=info msg="containerd successfully booted in 0.107841s"
Sep 12 17:29:51.933397 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:29:51.985374 tar[1510]: linux-arm64/README.md
Sep 12 17:29:52.003080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:29:52.090627 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:29:52.127745 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:29:52.134734 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:29:52.157727 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:29:52.157970 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:29:52.162594 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:29:52.193940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:29:52.201264 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:29:52.203366 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 12 17:29:52.204686 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:29:53.086682 systemd-networkd[1461]: eth0: Gained IPv6LL
Sep 12 17:29:53.090302 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:29:53.092262 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:29:53.095224 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 17:29:53.100254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:29:53.105009 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:29:53.144338 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 12 17:29:53.146113 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 12 17:29:53.148156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:29:53.152450 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:29:53.766194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:29:53.768296 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:29:53.773639 systemd[1]: Startup finished in 2.056s (kernel) + 5.107s (initrd) + 4.114s (userspace) = 11.278s.
Sep 12 17:29:53.789151 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:29:54.278937 kubelet[1626]: E0912 17:29:54.278851 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:29:54.281604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:29:54.281747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:29:54.282130 systemd[1]: kubelet.service: Consumed 769ms CPU time, 256.9M memory peak.
Sep 12 17:29:57.841324 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:29:57.843575 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:60950.service - OpenSSH per-connection server daemon (10.0.0.1:60950).
Sep 12 17:29:57.942059 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 60950 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:57.946348 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:57.954981 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:29:57.961622 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
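The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit fails and will be retried. As a hedged sketch of the kind of file kubeadm generates there (field values are illustrative assumptions, not recovered from this host):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml; normally generated by
# kubeadm, not written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches "SystemdCgroup": true in the containerd CRI config logged earlier.
cgroupDriver: systemd
```

Once kubeadm has written this file (and a kubeconfig), the systemd restart logic brings kubelet up cleanly.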
Sep 12 17:29:57.969457 systemd-logind[1496]: New session 1 of user core.
Sep 12 17:29:57.981587 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:29:57.986841 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:29:58.014796 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:29:58.018569 systemd-logind[1496]: New session c1 of user core.
Sep 12 17:29:58.139584 systemd[1645]: Queued start job for default target default.target.
Sep 12 17:29:58.148485 systemd[1645]: Created slice app.slice - User Application Slice.
Sep 12 17:29:58.148515 systemd[1645]: Reached target paths.target - Paths.
Sep 12 17:29:58.148577 systemd[1645]: Reached target timers.target - Timers.
Sep 12 17:29:58.149726 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:29:58.158890 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:29:58.158948 systemd[1645]: Reached target sockets.target - Sockets.
Sep 12 17:29:58.158981 systemd[1645]: Reached target basic.target - Basic System.
Sep 12 17:29:58.159006 systemd[1645]: Reached target default.target - Main User Target.
Sep 12 17:29:58.159028 systemd[1645]: Startup finished in 132ms.
Sep 12 17:29:58.161168 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:29:58.164038 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:29:58.226352 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:60962.service - OpenSSH per-connection server daemon (10.0.0.1:60962).
Sep 12 17:29:58.285833 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 60962 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:58.287140 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:58.291400 systemd-logind[1496]: New session 2 of user core.
Sep 12 17:29:58.306733 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:29:58.361083 sshd[1659]: Connection closed by 10.0.0.1 port 60962
Sep 12 17:29:58.361350 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Sep 12 17:29:58.372125 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:60962.service: Deactivated successfully.
Sep 12 17:29:58.375239 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:29:58.377632 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:29:58.380960 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:60968.service - OpenSSH per-connection server daemon (10.0.0.1:60968).
Sep 12 17:29:58.382048 systemd-logind[1496]: Removed session 2.
Sep 12 17:29:58.444831 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 60968 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:58.447575 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:58.452264 systemd-logind[1496]: New session 3 of user core.
Sep 12 17:29:58.461752 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:29:58.513419 sshd[1669]: Connection closed by 10.0.0.1 port 60968
Sep 12 17:29:58.513694 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 12 17:29:58.526643 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:60968.service: Deactivated successfully.
Sep 12 17:29:58.529520 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:29:58.532078 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:29:58.535448 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:60982.service - OpenSSH per-connection server daemon (10.0.0.1:60982).
Sep 12 17:29:58.537665 systemd-logind[1496]: Removed session 3.
Sep 12 17:29:58.593416 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 60982 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:58.595141 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:58.599551 systemd-logind[1496]: New session 4 of user core.
Sep 12 17:29:58.616478 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:29:58.676181 sshd[1678]: Connection closed by 10.0.0.1 port 60982
Sep 12 17:29:58.676654 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Sep 12 17:29:58.698482 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:60982.service: Deactivated successfully.
Sep 12 17:29:58.701946 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:29:58.704129 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:29:58.709134 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:60986.service - OpenSSH per-connection server daemon (10.0.0.1:60986).
Sep 12 17:29:58.710351 systemd-logind[1496]: Removed session 4.
Sep 12 17:29:58.769840 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 60986 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:58.771310 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:58.778552 systemd-logind[1496]: New session 5 of user core.
Sep 12 17:29:58.789731 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:29:58.848092 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:29:58.848347 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:29:58.861651 sudo[1688]: pam_unix(sudo:session): session closed for user root
Sep 12 17:29:58.863866 sshd[1687]: Connection closed by 10.0.0.1 port 60986
Sep 12 17:29:58.863750 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Sep 12 17:29:58.881718 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:60986.service: Deactivated successfully.
Sep 12 17:29:58.885402 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:29:58.888625 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:29:58.894260 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:60992.service - OpenSSH per-connection server daemon (10.0.0.1:60992).
Sep 12 17:29:58.896076 systemd-logind[1496]: Removed session 5.
Sep 12 17:29:58.971143 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 60992 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:58.972926 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:58.978496 systemd-logind[1496]: New session 6 of user core.
Sep 12 17:29:58.988795 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:29:59.042906 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:29:59.043681 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:29:59.225508 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 12 17:29:59.231008 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 17:29:59.231280 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:29:59.244928 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:29:59.289556 augenrules[1721]: No rules
Sep 12 17:29:59.290777 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:29:59.291009 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:29:59.293083 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 12 17:29:59.296008 sshd[1697]: Connection closed by 10.0.0.1 port 60992
Sep 12 17:29:59.295450 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Sep 12 17:29:59.304697 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:60992.service: Deactivated successfully.
Sep 12 17:29:59.306373 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:29:59.307537 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:29:59.312948 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:60998.service - OpenSSH per-connection server daemon (10.0.0.1:60998).
Sep 12 17:29:59.319539 systemd-logind[1496]: Removed session 6.
Sep 12 17:29:59.376600 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 60998 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:29:59.379738 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:29:59.385776 systemd-logind[1496]: New session 7 of user core.
Sep 12 17:29:59.395729 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:29:59.449963 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:29:59.450621 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:29:59.750418 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:29:59.763898 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:29:59.968614 dockerd[1754]: time="2025-09-12T17:29:59.967639794Z" level=info msg="Starting up"
Sep 12 17:29:59.969336 dockerd[1754]: time="2025-09-12T17:29:59.969303104Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 17:29:59.980427 dockerd[1754]: time="2025-09-12T17:29:59.980367978Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 17:29:59.996728 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2780843535-merged.mount: Deactivated successfully.
Sep 12 17:30:00.017493 dockerd[1754]: time="2025-09-12T17:30:00.017390483Z" level=info msg="Loading containers: start."
Sep 12 17:30:00.026552 kernel: Initializing XFRM netlink socket
Sep 12 17:30:00.267753 systemd-networkd[1461]: docker0: Link UP
Sep 12 17:30:00.272314 dockerd[1754]: time="2025-09-12T17:30:00.272268161Z" level=info msg="Loading containers: done."
Sep 12 17:30:00.285702 dockerd[1754]: time="2025-09-12T17:30:00.285648011Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:30:00.285855 dockerd[1754]: time="2025-09-12T17:30:00.285742150Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 17:30:00.285855 dockerd[1754]: time="2025-09-12T17:30:00.285835374Z" level=info msg="Initializing buildkit"
Sep 12 17:30:00.309436 dockerd[1754]: time="2025-09-12T17:30:00.309358043Z" level=info msg="Completed buildkit initialization"
Sep 12 17:30:00.316589 dockerd[1754]: time="2025-09-12T17:30:00.316541074Z" level=info msg="Daemon has completed initialization"
Sep 12 17:30:00.316820 dockerd[1754]: time="2025-09-12T17:30:00.316614594Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:30:00.316907 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:30:00.820576 containerd[1514]: time="2025-09-12T17:30:00.820136271Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 17:30:01.383331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616386100.mount: Deactivated successfully.
Sep 12 17:30:02.345013 containerd[1514]: time="2025-09-12T17:30:02.344963343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:02.346007 containerd[1514]: time="2025-09-12T17:30:02.345960933Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Sep 12 17:30:02.347251 containerd[1514]: time="2025-09-12T17:30:02.346855605Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:02.349143 containerd[1514]: time="2025-09-12T17:30:02.349108197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:02.350134 containerd[1514]: time="2025-09-12T17:30:02.350103198Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.529915574s"
Sep 12 17:30:02.350185 containerd[1514]: time="2025-09-12T17:30:02.350138566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 12 17:30:02.350680 containerd[1514]: time="2025-09-12T17:30:02.350661636Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 17:30:03.407744 containerd[1514]: time="2025-09-12T17:30:03.407688217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:03.409177 containerd[1514]: time="2025-09-12T17:30:03.409144985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Sep 12 17:30:03.410036 containerd[1514]: time="2025-09-12T17:30:03.409991693Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:03.413569 containerd[1514]: time="2025-09-12T17:30:03.413091398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:03.414342 containerd[1514]: time="2025-09-12T17:30:03.414050452Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.063362687s"
Sep 12 17:30:03.414342 containerd[1514]: time="2025-09-12T17:30:03.414081447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 12 17:30:03.414797 containerd[1514]: time="2025-09-12T17:30:03.414764853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 17:30:04.532122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:30:04.533913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:30:04.773608 containerd[1514]: time="2025-09-12T17:30:04.773566856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:04.774487 containerd[1514]: time="2025-09-12T17:30:04.774456615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Sep 12 17:30:04.776509 containerd[1514]: time="2025-09-12T17:30:04.775733590Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:04.777834 containerd[1514]: time="2025-09-12T17:30:04.777802693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:04.778842 containerd[1514]: time="2025-09-12T17:30:04.778805943Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.364010252s"
Sep 12 17:30:04.778842 containerd[1514]: time="2025-09-12T17:30:04.778840014Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 12 17:30:04.779258 containerd[1514]: time="2025-09-12T17:30:04.779234803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 17:30:04.793295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:04.797148 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:30:04.833742 kubelet[2044]: E0912 17:30:04.833678 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:30:04.836767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:30:04.836912 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:30:04.837211 systemd[1]: kubelet.service: Consumed 142ms CPU time, 106.5M memory peak.
Sep 12 17:30:05.741354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952702722.mount: Deactivated successfully.
Sep 12 17:30:06.319454 containerd[1514]: time="2025-09-12T17:30:06.319381852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:06.319914 containerd[1514]: time="2025-09-12T17:30:06.319872424Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Sep 12 17:30:06.320808 containerd[1514]: time="2025-09-12T17:30:06.320767174Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:06.324421 containerd[1514]: time="2025-09-12T17:30:06.324383372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:06.325177 containerd[1514]: time="2025-09-12T17:30:06.325140220Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.545873534s"
Sep 12 17:30:06.325217 containerd[1514]: time="2025-09-12T17:30:06.325177297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 12 17:30:06.325787 containerd[1514]: time="2025-09-12T17:30:06.325748362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:30:06.892211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714829014.mount: Deactivated successfully.
Sep 12 17:30:07.586847 containerd[1514]: time="2025-09-12T17:30:07.586750018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:07.587285 containerd[1514]: time="2025-09-12T17:30:07.587253491Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 12 17:30:07.588154 containerd[1514]: time="2025-09-12T17:30:07.588121591Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:07.591023 containerd[1514]: time="2025-09-12T17:30:07.590986678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:07.592853 containerd[1514]: time="2025-09-12T17:30:07.592811561Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.267025562s"
Sep 12 17:30:07.592914 containerd[1514]: time="2025-09-12T17:30:07.592851556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 17:30:07.593276 containerd[1514]: time="2025-09-12T17:30:07.593254822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:30:08.029678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445119473.mount: Deactivated successfully.
Sep 12 17:30:08.034278 containerd[1514]: time="2025-09-12T17:30:08.034235436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:30:08.034943 containerd[1514]: time="2025-09-12T17:30:08.034911465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 12 17:30:08.036550 containerd[1514]: time="2025-09-12T17:30:08.035645804Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:30:08.037929 containerd[1514]: time="2025-09-12T17:30:08.037877456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:30:08.039261 containerd[1514]: time="2025-09-12T17:30:08.039232027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 445.938522ms"
Sep 12 17:30:08.039322 containerd[1514]: time="2025-09-12T17:30:08.039264771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:30:08.039705 containerd[1514]: time="2025-09-12T17:30:08.039678405Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:30:08.530635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634876612.mount: Deactivated successfully.
Sep 12 17:30:10.135594 containerd[1514]: time="2025-09-12T17:30:10.135551424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:10.137541 containerd[1514]: time="2025-09-12T17:30:10.137434200Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 12 17:30:10.268991 containerd[1514]: time="2025-09-12T17:30:10.268905188Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:10.272120 containerd[1514]: time="2025-09-12T17:30:10.272074069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:10.274263 containerd[1514]: time="2025-09-12T17:30:10.274147158Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.234439197s"
Sep 12 17:30:10.274263 containerd[1514]: time="2025-09-12T17:30:10.274180792Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 12 17:30:15.087419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:30:15.089349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:30:15.232547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:15.236370 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:30:15.270070 kubelet[2203]: E0912 17:30:15.270008 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:30:15.272461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:30:15.272710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:30:15.274687 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.3M memory peak.
Sep 12 17:30:16.267266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:16.267899 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.3M memory peak.
Sep 12 17:30:16.270761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:30:16.295959 systemd[1]: Reload requested from client PID 2219 ('systemctl') (unit session-7.scope)...
Sep 12 17:30:16.295975 systemd[1]: Reloading...
Sep 12 17:30:16.367628 zram_generator::config[2262]: No configuration found.
Sep 12 17:30:16.633797 systemd[1]: Reloading finished in 337 ms.
Sep 12 17:30:16.694103 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:30:16.694188 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:30:16.694544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:16.694603 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95.1M memory peak.
Sep 12 17:30:16.696224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:30:16.820853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:16.835893 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:30:16.873074 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:30:16.873074 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:30:16.873074 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:30:16.873406 kubelet[2307]: I0912 17:30:16.873132 2307 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:30:17.716111 kubelet[2307]: I0912 17:30:17.716063 2307 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:30:17.716111 kubelet[2307]: I0912 17:30:17.716098 2307 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:30:17.716393 kubelet[2307]: I0912 17:30:17.716366 2307 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:30:17.737993 kubelet[2307]: E0912 17:30:17.737936 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:30:17.739478 kubelet[2307]: I0912 17:30:17.739442 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:30:17.745504 kubelet[2307]: I0912 17:30:17.745475 2307 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:30:17.748706 kubelet[2307]: I0912 17:30:17.748686 2307 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:30:17.749337 kubelet[2307]: I0912 17:30:17.749289 2307 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:30:17.749520 kubelet[2307]: I0912 17:30:17.749335 2307 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:30:17.749623 kubelet[2307]: I0912 17:30:17.749615 2307 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:30:17.749646 kubelet[2307]: I0912 17:30:17.749627 2307 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:30:17.749836 kubelet[2307]: I0912 17:30:17.749821 2307 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:30:17.752198 kubelet[2307]: I0912 17:30:17.752145 2307 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:30:17.752198 kubelet[2307]: I0912 17:30:17.752179 2307 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:30:17.752198 kubelet[2307]: I0912 17:30:17.752204 2307 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:30:17.752361 kubelet[2307]: I0912 17:30:17.752215 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:30:17.753863 kubelet[2307]: W0912 17:30:17.753471 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Sep 12 17:30:17.753863 kubelet[2307]: E0912 17:30:17.753553 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:30:17.754507 kubelet[2307]: I0912 17:30:17.754472 2307 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:30:17.755119 kubelet[2307]: I0912 17:30:17.755106 2307 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:30:17.755167 kubelet[2307]: W0912 17:30:17.755133 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Sep 12 17:30:17.755203 kubelet[2307]: E0912 17:30:17.755180 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:30:17.755239 kubelet[2307]: W0912 17:30:17.755220 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:30:17.757378 kubelet[2307]: I0912 17:30:17.757344 2307 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:30:17.757442 kubelet[2307]: I0912 17:30:17.757394 2307 server.go:1287] "Started kubelet"
Sep 12 17:30:17.759197 kubelet[2307]: I0912 17:30:17.759132 2307 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:30:17.759688 kubelet[2307]: I0912 17:30:17.759623 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:30:17.760411 kubelet[2307]: I0912 17:30:17.760376 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:30:17.760990 kubelet[2307]: I0912 17:30:17.760962 2307 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:30:17.762028 kubelet[2307]: I0912 17:30:17.761987 2307 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:30:17.762104 kubelet[2307]: I0912 17:30:17.762072 2307 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:30:17.762643 kubelet[2307]: I0912 17:30:17.762622 2307 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:30:17.762870 kubelet[2307]: I0912 17:30:17.762839 2307 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:30:17.762969 kubelet[2307]: E0912 17:30:17.762937 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:17.763398 kubelet[2307]: W0912 17:30:17.763349 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Sep 12 17:30:17.763470 kubelet[2307]: E0912 17:30:17.763401 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:30:17.763497 kubelet[2307]: E0912 17:30:17.763473 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms"
Sep 12 17:30:17.764836 kubelet[2307]: I0912 17:30:17.763683 2307 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:30:17.764836 kubelet[2307]: I0912 17:30:17.763806 2307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:30:17.764836 kubelet[2307]: I0912 17:30:17.760855 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:30:17.765483 kubelet[2307]: E0912 17:30:17.765182 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864993763c3efa6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:30:17.757372326 +0000 UTC m=+0.918451642,LastTimestamp:2025-09-12 17:30:17.757372326 +0000 UTC m=+0.918451642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:30:17.766922 kubelet[2307]: I0912 17:30:17.766875 2307 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:30:17.768291 kubelet[2307]: E0912 17:30:17.768268 2307 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:30:17.778039 kubelet[2307]: I0912 17:30:17.778001 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:30:17.779260 kubelet[2307]: I0912 17:30:17.779066 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:30:17.779260 kubelet[2307]: I0912 17:30:17.779086 2307 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:30:17.779260 kubelet[2307]: I0912 17:30:17.779102 2307 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:30:17.779260 kubelet[2307]: I0912 17:30:17.779108 2307 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:30:17.779260 kubelet[2307]: E0912 17:30:17.779143 2307 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:30:17.780381 kubelet[2307]: I0912 17:30:17.780362 2307 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:30:17.780381 kubelet[2307]: I0912 17:30:17.780376 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:30:17.780475 kubelet[2307]: I0912 17:30:17.780403 2307 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:30:17.782209 kubelet[2307]: W0912 17:30:17.782161 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Sep 12 17:30:17.782277 kubelet[2307]: E0912 17:30:17.782225 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:30:17.863104 kubelet[2307]: E0912 17:30:17.863063 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:17.867911 kubelet[2307]: I0912 17:30:17.867879 2307 policy_none.go:49] "None policy: Start"
Sep 12 17:30:17.867911 kubelet[2307]: I0912 17:30:17.867908 2307 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:30:17.867977 kubelet[2307]: I0912 17:30:17.867925 2307 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:30:17.874514 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:30:17.879228 kubelet[2307]: E0912 17:30:17.879204 2307 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:30:17.889712 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:30:17.892876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:30:17.904451 kubelet[2307]: I0912 17:30:17.904406 2307 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:30:17.905178 kubelet[2307]: I0912 17:30:17.904658 2307 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:30:17.905178 kubelet[2307]: I0912 17:30:17.904676 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:30:17.905178 kubelet[2307]: I0912 17:30:17.904925 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:30:17.905940 kubelet[2307]: E0912 17:30:17.905919 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:30:17.906010 kubelet[2307]: E0912 17:30:17.905962 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 17:30:17.964819 kubelet[2307]: E0912 17:30:17.964780 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms"
Sep 12 17:30:18.009130 kubelet[2307]: I0912 17:30:18.008994 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:30:18.010603 kubelet[2307]: E0912 17:30:18.010565 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Sep 12 17:30:18.089418 systemd[1]: Created slice kubepods-burstable-pod88d1a9247a446da970a5b698401d59a9.slice - libcontainer container kubepods-burstable-pod88d1a9247a446da970a5b698401d59a9.slice.
Sep 12 17:30:18.099450 kubelet[2307]: E0912 17:30:18.099411 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:18.102980 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 12 17:30:18.125980 kubelet[2307]: E0912 17:30:18.125915 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:18.128748 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 12 17:30:18.130498 kubelet[2307]: E0912 17:30:18.130474 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:18.165273 kubelet[2307]: I0912 17:30:18.165220 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:18.165273 kubelet[2307]: I0912 17:30:18.165274 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:18.165410 kubelet[2307]: I0912 17:30:18.165314 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:30:18.165410 kubelet[2307]: I0912 17:30:18.165336 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:18.165410 kubelet[2307]: I0912 17:30:18.165359 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88d1a9247a446da970a5b698401d59a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88d1a9247a446da970a5b698401d59a9\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:30:18.165410 kubelet[2307]: I0912 17:30:18.165378 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88d1a9247a446da970a5b698401d59a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88d1a9247a446da970a5b698401d59a9\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:30:18.165410 kubelet[2307]: I0912 17:30:18.165395 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88d1a9247a446da970a5b698401d59a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88d1a9247a446da970a5b698401d59a9\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:30:18.165513 kubelet[2307]: I0912 17:30:18.165414 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:18.165513 kubelet[2307]: I0912 17:30:18.165435 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:18.211921 kubelet[2307]: I0912 17:30:18.211878 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:30:18.212327 kubelet[2307]: E0912 17:30:18.212300 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Sep 12 17:30:18.365821 kubelet[2307]: E0912 17:30:18.365706 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms"
Sep 12 17:30:18.400617 containerd[1514]: time="2025-09-12T17:30:18.400574430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88d1a9247a446da970a5b698401d59a9,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:18.419844 containerd[1514]: time="2025-09-12T17:30:18.419770954Z" level=info msg="connecting to shim 400d9879e1d537d1193ff0450b97ce0bb8707c337eca8607dca4fdd95f332e75" address="unix:///run/containerd/s/302f1987b51abc257302ca66987b9feee11fd0e59f16d9b58c2681c973dcaca7" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:30:18.428726 containerd[1514]: time="2025-09-12T17:30:18.428658741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:18.432992 containerd[1514]: time="2025-09-12T17:30:18.432954581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:18.445754 systemd[1]: Started cri-containerd-400d9879e1d537d1193ff0450b97ce0bb8707c337eca8607dca4fdd95f332e75.scope - libcontainer container 400d9879e1d537d1193ff0450b97ce0bb8707c337eca8607dca4fdd95f332e75.
Sep 12 17:30:18.463081 containerd[1514]: time="2025-09-12T17:30:18.462600627Z" level=info msg="connecting to shim 4d6a76c453559664cda532facc6dd58b3c55e24b4825cdab99aeb0405fe5b1da" address="unix:///run/containerd/s/1ceb977c6964ce7641fbd203b159f7428bfc9314a421274c46f46318734aa96a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:30:18.471063 containerd[1514]: time="2025-09-12T17:30:18.471015256Z" level=info msg="connecting to shim ab1f75b78a7ec969c85ee53b3fb65c783c8e4efe15cf983d57efb1fc7f0447c9" address="unix:///run/containerd/s/eabc76726c185589d28abb3156514e8eee63d66107ece5b56aece6d74316943a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:30:18.490944 systemd[1]: Started cri-containerd-4d6a76c453559664cda532facc6dd58b3c55e24b4825cdab99aeb0405fe5b1da.scope - libcontainer container 4d6a76c453559664cda532facc6dd58b3c55e24b4825cdab99aeb0405fe5b1da.
Sep 12 17:30:18.496854 systemd[1]: Started cri-containerd-ab1f75b78a7ec969c85ee53b3fb65c783c8e4efe15cf983d57efb1fc7f0447c9.scope - libcontainer container ab1f75b78a7ec969c85ee53b3fb65c783c8e4efe15cf983d57efb1fc7f0447c9.
Sep 12 17:30:18.498424 containerd[1514]: time="2025-09-12T17:30:18.498382902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88d1a9247a446da970a5b698401d59a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"400d9879e1d537d1193ff0450b97ce0bb8707c337eca8607dca4fdd95f332e75\""
Sep 12 17:30:18.504663 containerd[1514]: time="2025-09-12T17:30:18.504623332Z" level=info msg="CreateContainer within sandbox \"400d9879e1d537d1193ff0450b97ce0bb8707c337eca8607dca4fdd95f332e75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:30:18.514516 containerd[1514]: time="2025-09-12T17:30:18.514476884Z" level=info msg="Container 4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:30:18.522083 containerd[1514]: time="2025-09-12T17:30:18.522042690Z" level=info msg="CreateContainer within sandbox \"400d9879e1d537d1193ff0450b97ce0bb8707c337eca8607dca4fdd95f332e75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de\""
Sep 12 17:30:18.523207 containerd[1514]: time="2025-09-12T17:30:18.523172565Z" level=info msg="StartContainer for \"4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de\""
Sep 12 17:30:18.527067 containerd[1514]: time="2025-09-12T17:30:18.526999560Z" level=info msg="connecting to shim 4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de" address="unix:///run/containerd/s/302f1987b51abc257302ca66987b9feee11fd0e59f16d9b58c2681c973dcaca7" protocol=ttrpc version=3
Sep 12 17:30:18.534975 containerd[1514]: time="2025-09-12T17:30:18.534938996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d6a76c453559664cda532facc6dd58b3c55e24b4825cdab99aeb0405fe5b1da\""
Sep 12 17:30:18.537285 containerd[1514]: time="2025-09-12T17:30:18.537170947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab1f75b78a7ec969c85ee53b3fb65c783c8e4efe15cf983d57efb1fc7f0447c9\""
Sep 12 17:30:18.539305 containerd[1514]: time="2025-09-12T17:30:18.539268064Z" level=info msg="CreateContainer within sandbox \"4d6a76c453559664cda532facc6dd58b3c55e24b4825cdab99aeb0405fe5b1da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:30:18.540551 containerd[1514]: time="2025-09-12T17:30:18.540347776Z" level=info msg="CreateContainer within sandbox \"ab1f75b78a7ec969c85ee53b3fb65c783c8e4efe15cf983d57efb1fc7f0447c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:30:18.549726 containerd[1514]: time="2025-09-12T17:30:18.549689070Z" level=info msg="Container f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:30:18.551122 containerd[1514]: time="2025-09-12T17:30:18.551091409Z" level=info msg="Container 4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:30:18.553740 systemd[1]: Started cri-containerd-4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de.scope - libcontainer container 4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de.
Sep 12 17:30:18.556629 containerd[1514]: time="2025-09-12T17:30:18.556508736Z" level=info msg="CreateContainer within sandbox \"4d6a76c453559664cda532facc6dd58b3c55e24b4825cdab99aeb0405fe5b1da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0\""
Sep 12 17:30:18.557305 containerd[1514]: time="2025-09-12T17:30:18.557270852Z" level=info msg="StartContainer for \"f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0\""
Sep 12 17:30:18.559490 containerd[1514]: time="2025-09-12T17:30:18.559454557Z" level=info msg="CreateContainer within sandbox \"ab1f75b78a7ec969c85ee53b3fb65c783c8e4efe15cf983d57efb1fc7f0447c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196\""
Sep 12 17:30:18.559598 containerd[1514]: time="2025-09-12T17:30:18.559568583Z" level=info msg="connecting to shim f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0" address="unix:///run/containerd/s/1ceb977c6964ce7641fbd203b159f7428bfc9314a421274c46f46318734aa96a" protocol=ttrpc version=3
Sep 12 17:30:18.559913 containerd[1514]: time="2025-09-12T17:30:18.559885579Z" level=info msg="StartContainer for \"4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196\""
Sep 12 17:30:18.560872 containerd[1514]: time="2025-09-12T17:30:18.560835408Z" level=info msg="connecting to shim 4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196" address="unix:///run/containerd/s/eabc76726c185589d28abb3156514e8eee63d66107ece5b56aece6d74316943a" protocol=ttrpc version=3
Sep 12 17:30:18.579744 systemd[1]: Started cri-containerd-f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0.scope - libcontainer container f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0.
Sep 12 17:30:18.583746 systemd[1]: Started cri-containerd-4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196.scope - libcontainer container 4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196.
Sep 12 17:30:18.597463 containerd[1514]: time="2025-09-12T17:30:18.596696723Z" level=info msg="StartContainer for \"4777695c40f529e76837c09e01705e914251216533558b6f180bad941e3d80de\" returns successfully"
Sep 12 17:30:18.614004 kubelet[2307]: I0912 17:30:18.613979 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:30:18.614710 kubelet[2307]: E0912 17:30:18.614679 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
Sep 12 17:30:18.640306 containerd[1514]: time="2025-09-12T17:30:18.640019443Z" level=info msg="StartContainer for \"f29f1c41e58a8009a3c144658a2d46ebae31653de9d871d311b763c46d0a9eb0\" returns successfully"
Sep 12 17:30:18.643997 containerd[1514]: time="2025-09-12T17:30:18.643962142Z" level=info msg="StartContainer for \"4687ca9ca83516d751fbe88c26ab3845a82ef4f0e10e0e9479a031e91a2e9196\" returns successfully"
Sep 12 17:30:18.658290 kubelet[2307]: W0912 17:30:18.658177 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
Sep 12 17:30:18.658290 kubelet[2307]: E0912 17:30:18.658257 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:30:18.785609 kubelet[2307]: E0912 17:30:18.785581 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:18.789735 kubelet[2307]: E0912 17:30:18.789569 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:18.791169 kubelet[2307]: E0912 17:30:18.791036 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:19.419802 kubelet[2307]: I0912 17:30:19.419628 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 17:30:19.794883 kubelet[2307]: E0912 17:30:19.794659 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:19.794968 kubelet[2307]: E0912 17:30:19.794947 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:20.374754 kubelet[2307]: E0912 17:30:20.374684 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 12 17:30:20.449005 kubelet[2307]: E0912 17:30:20.448884 2307 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1864993763c3efa6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:30:17.757372326 +0000 UTC m=+0.918451642,LastTimestamp:2025-09-12 17:30:17.757372326 +0000 UTC m=+0.918451642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:30:20.491982 kubelet[2307]: I0912 17:30:20.491945 2307 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 17:30:20.491982 kubelet[2307]: E0912 17:30:20.491988 2307 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 17:30:20.508641 kubelet[2307]: E0912 17:30:20.508510 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:20.609077 kubelet[2307]: E0912 17:30:20.609028 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:20.710048 kubelet[2307]: E0912 17:30:20.709578 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:20.795048 kubelet[2307]: E0912 17:30:20.794937 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 17:30:20.810247 kubelet[2307]: E0912 17:30:20.810209 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:20.911136 kubelet[2307]: E0912 17:30:20.911077 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:21.012022 kubelet[2307]: E0912 17:30:21.011831 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:30:21.062409 kubelet[2307]: I0912 17:30:21.062357 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:30:21.069425 kubelet[2307]: E0912 17:30:21.069389 2307 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:30:21.069425 kubelet[2307]: I0912 17:30:21.069421 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:21.071581 kubelet[2307]: E0912 17:30:21.071383 2307 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:30:21.071581 kubelet[2307]: I0912 17:30:21.071416 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:30:21.073188 kubelet[2307]: E0912 17:30:21.073162 2307 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 12 17:30:21.755248 kubelet[2307]: I0912 17:30:21.755016 2307 apiserver.go:52] "Watching apiserver"
Sep 12 17:30:21.762644 kubelet[2307]: I0912 17:30:21.762597 2307 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:30:22.259649 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-7.scope)...
Sep 12 17:30:22.259665 systemd[1]: Reloading...
Sep 12 17:30:22.323574 zram_generator::config[2628]: No configuration found.
Sep 12 17:30:22.487388 systemd[1]: Reloading finished in 227 ms.
Sep 12 17:30:22.515870 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:30:22.536522 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:30:22.536897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:22.536962 systemd[1]: kubelet.service: Consumed 1.303s CPU time, 128M memory peak.
Sep 12 17:30:22.538786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:30:22.707166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:30:22.720886 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:30:22.771611 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:30:22.771611 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:30:22.771611 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:30:22.771611 kubelet[2670]: I0912 17:30:22.771329 2670 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:30:22.778106 kubelet[2670]: I0912 17:30:22.778068 2670 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:30:22.778279 kubelet[2670]: I0912 17:30:22.778269 2670 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:30:22.778665 kubelet[2670]: I0912 17:30:22.778645 2670 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:30:22.780063 kubelet[2670]: I0912 17:30:22.780032 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:30:22.783108 kubelet[2670]: I0912 17:30:22.782970 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:30:22.789943 kubelet[2670]: I0912 17:30:22.789915 2670 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:30:22.792639 kubelet[2670]: I0912 17:30:22.792615 2670 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:30:22.792835 kubelet[2670]: I0912 17:30:22.792811 2670 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:30:22.793000 kubelet[2670]: I0912 17:30:22.792837 2670 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:30:22.793070 kubelet[2670]: I0912 17:30:22.793009 2670 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:30:22.793070 kubelet[2670]: I0912 17:30:22.793021 2670 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:30:22.793070 kubelet[2670]: I0912 17:30:22.793070 2670 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:30:22.793221 kubelet[2670]: I0912 17:30:22.793207 2670 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:30:22.793221 kubelet[2670]: I0912 17:30:22.793220 2670 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:30:22.793268 kubelet[2670]: I0912 17:30:22.793240 2670 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:30:22.793268 kubelet[2670]: I0912 17:30:22.793255 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:30:22.794628 kubelet[2670]: I0912 17:30:22.794595 2670 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:30:22.795068 kubelet[2670]: I0912 17:30:22.795044 2670 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:30:22.795594 kubelet[2670]: I0912 17:30:22.795570 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:30:22.795671 kubelet[2670]: I0912 17:30:22.795608 2670 server.go:1287] "Started kubelet" Sep 12 17:30:22.797333 kubelet[2670]: I0912 17:30:22.797262 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:30:22.797876 
kubelet[2670]: I0912 17:30:22.797846 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:30:22.798826 kubelet[2670]: I0912 17:30:22.798706 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:30:22.799862 kubelet[2670]: I0912 17:30:22.799834 2670 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:30:22.800040 kubelet[2670]: I0912 17:30:22.800015 2670 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:30:22.803073 kubelet[2670]: I0912 17:30:22.803047 2670 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:30:22.806538 kubelet[2670]: I0912 17:30:22.803312 2670 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:30:22.806746 kubelet[2670]: I0912 17:30:22.803332 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:30:22.806799 kubelet[2670]: E0912 17:30:22.803478 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:30:22.806850 kubelet[2670]: I0912 17:30:22.806612 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:30:22.807514 kubelet[2670]: I0912 17:30:22.807487 2670 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:30:22.808140 kubelet[2670]: I0912 17:30:22.808112 2670 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:30:22.808473 kubelet[2670]: I0912 17:30:22.808449 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:30:22.811656 kubelet[2670]: I0912 17:30:22.810650 2670 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:30:22.811656 kubelet[2670]: I0912 17:30:22.810682 2670 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:30:22.811656 kubelet[2670]: I0912 17:30:22.810703 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:30:22.811656 kubelet[2670]: I0912 17:30:22.810709 2670 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:30:22.811656 kubelet[2670]: E0912 17:30:22.810752 2670 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:30:22.815555 kubelet[2670]: I0912 17:30:22.814736 2670 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:30:22.825864 kubelet[2670]: E0912 17:30:22.825828 2670 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:30:22.855877 kubelet[2670]: I0912 17:30:22.855843 2670 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:30:22.855877 kubelet[2670]: I0912 17:30:22.855867 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:30:22.855877 kubelet[2670]: I0912 17:30:22.855889 2670 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:30:22.856095 kubelet[2670]: I0912 17:30:22.856056 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:30:22.856095 kubelet[2670]: I0912 17:30:22.856072 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:30:22.856095 kubelet[2670]: I0912 17:30:22.856092 2670 policy_none.go:49] "None policy: Start" Sep 12 17:30:22.856095 kubelet[2670]: I0912 17:30:22.856100 2670 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:30:22.856186 kubelet[2670]: I0912 17:30:22.856110 2670 state_mem.go:35] "Initializing new in-memory state 
store" Sep 12 17:30:22.856233 kubelet[2670]: I0912 17:30:22.856217 2670 state_mem.go:75] "Updated machine memory state" Sep 12 17:30:22.864310 kubelet[2670]: I0912 17:30:22.864273 2670 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:30:22.864511 kubelet[2670]: I0912 17:30:22.864494 2670 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:30:22.864694 kubelet[2670]: I0912 17:30:22.864510 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:30:22.865166 kubelet[2670]: I0912 17:30:22.864836 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:30:22.866763 kubelet[2670]: E0912 17:30:22.866737 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:30:22.912114 kubelet[2670]: I0912 17:30:22.912067 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:22.912114 kubelet[2670]: I0912 17:30:22.912088 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:30:22.912275 kubelet[2670]: I0912 17:30:22.912172 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:22.966255 kubelet[2670]: I0912 17:30:22.966228 2670 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:30:22.973599 kubelet[2670]: I0912 17:30:22.973560 2670 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:30:22.973803 kubelet[2670]: I0912 17:30:22.973648 2670 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:30:23.009066 kubelet[2670]: I0912 17:30:23.009014 2670 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88d1a9247a446da970a5b698401d59a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88d1a9247a446da970a5b698401d59a9\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:23.009066 kubelet[2670]: I0912 17:30:23.009054 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88d1a9247a446da970a5b698401d59a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88d1a9247a446da970a5b698401d59a9\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:23.009230 kubelet[2670]: I0912 17:30:23.009080 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88d1a9247a446da970a5b698401d59a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88d1a9247a446da970a5b698401d59a9\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:23.009230 kubelet[2670]: I0912 17:30:23.009111 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:23.009230 kubelet[2670]: I0912 17:30:23.009125 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:23.009230 kubelet[2670]: I0912 17:30:23.009143 2670 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:23.009230 kubelet[2670]: I0912 17:30:23.009159 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:23.009345 kubelet[2670]: I0912 17:30:23.009180 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:30:23.009345 kubelet[2670]: I0912 17:30:23.009195 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:30:23.793796 kubelet[2670]: I0912 17:30:23.793747 2670 apiserver.go:52] "Watching apiserver" Sep 12 17:30:23.807384 kubelet[2670]: I0912 17:30:23.807329 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:30:23.842132 kubelet[2670]: I0912 17:30:23.842086 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:23.842267 kubelet[2670]: 
I0912 17:30:23.842207 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:30:23.852631 kubelet[2670]: E0912 17:30:23.851341 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:30:23.853191 kubelet[2670]: E0912 17:30:23.853165 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:30:23.868290 kubelet[2670]: I0912 17:30:23.867812 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8677754420000001 podStartE2EDuration="1.867775442s" podCreationTimestamp="2025-09-12 17:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:23.86775035 +0000 UTC m=+1.141006219" watchObservedRunningTime="2025-09-12 17:30:23.867775442 +0000 UTC m=+1.141031271" Sep 12 17:30:23.887429 kubelet[2670]: I0912 17:30:23.887316 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.88729858 podStartE2EDuration="1.88729858s" podCreationTimestamp="2025-09-12 17:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:23.877511879 +0000 UTC m=+1.150767708" watchObservedRunningTime="2025-09-12 17:30:23.88729858 +0000 UTC m=+1.160554408" Sep 12 17:30:28.779606 kubelet[2670]: I0912 17:30:28.779466 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.779445542 podStartE2EDuration="6.779445542s" podCreationTimestamp="2025-09-12 17:30:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:23.888100453 +0000 UTC m=+1.161356322" watchObservedRunningTime="2025-09-12 17:30:28.779445542 +0000 UTC m=+6.052701371" Sep 12 17:30:28.841438 kubelet[2670]: I0912 17:30:28.841394 2670 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:30:28.841851 containerd[1514]: time="2025-09-12T17:30:28.841804314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:30:28.842131 kubelet[2670]: I0912 17:30:28.842035 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:30:29.791204 systemd[1]: Created slice kubepods-besteffort-pod11da6945_4641_456c_8e9d_d5652c7fe22e.slice - libcontainer container kubepods-besteffort-pod11da6945_4641_456c_8e9d_d5652c7fe22e.slice. Sep 12 17:30:29.853254 kubelet[2670]: I0912 17:30:29.853205 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11da6945-4641-456c-8e9d-d5652c7fe22e-kube-proxy\") pod \"kube-proxy-j4hpr\" (UID: \"11da6945-4641-456c-8e9d-d5652c7fe22e\") " pod="kube-system/kube-proxy-j4hpr" Sep 12 17:30:29.853826 kubelet[2670]: I0912 17:30:29.853433 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11da6945-4641-456c-8e9d-d5652c7fe22e-xtables-lock\") pod \"kube-proxy-j4hpr\" (UID: \"11da6945-4641-456c-8e9d-d5652c7fe22e\") " pod="kube-system/kube-proxy-j4hpr" Sep 12 17:30:29.853826 kubelet[2670]: I0912 17:30:29.853453 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11da6945-4641-456c-8e9d-d5652c7fe22e-lib-modules\") 
pod \"kube-proxy-j4hpr\" (UID: \"11da6945-4641-456c-8e9d-d5652c7fe22e\") " pod="kube-system/kube-proxy-j4hpr" Sep 12 17:30:29.853826 kubelet[2670]: I0912 17:30:29.853471 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stbtj\" (UniqueName: \"kubernetes.io/projected/11da6945-4641-456c-8e9d-d5652c7fe22e-kube-api-access-stbtj\") pod \"kube-proxy-j4hpr\" (UID: \"11da6945-4641-456c-8e9d-d5652c7fe22e\") " pod="kube-system/kube-proxy-j4hpr" Sep 12 17:30:29.960924 systemd[1]: Created slice kubepods-besteffort-pod9d0eb74e_2309_48ab_8066_5a78b22c023d.slice - libcontainer container kubepods-besteffort-pod9d0eb74e_2309_48ab_8066_5a78b22c023d.slice. Sep 12 17:30:30.055242 kubelet[2670]: I0912 17:30:30.054907 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq8bs\" (UniqueName: \"kubernetes.io/projected/9d0eb74e-2309-48ab-8066-5a78b22c023d-kube-api-access-hq8bs\") pod \"tigera-operator-755d956888-pxmzb\" (UID: \"9d0eb74e-2309-48ab-8066-5a78b22c023d\") " pod="tigera-operator/tigera-operator-755d956888-pxmzb" Sep 12 17:30:30.055242 kubelet[2670]: I0912 17:30:30.054958 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d0eb74e-2309-48ab-8066-5a78b22c023d-var-lib-calico\") pod \"tigera-operator-755d956888-pxmzb\" (UID: \"9d0eb74e-2309-48ab-8066-5a78b22c023d\") " pod="tigera-operator/tigera-operator-755d956888-pxmzb" Sep 12 17:30:30.104304 containerd[1514]: time="2025-09-12T17:30:30.104258191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4hpr,Uid:11da6945-4641-456c-8e9d-d5652c7fe22e,Namespace:kube-system,Attempt:0,}" Sep 12 17:30:30.128671 containerd[1514]: time="2025-09-12T17:30:30.128595354Z" level=info msg="connecting to shim a63c091e48fd155328d240406bfd1a3df9525dcd2657f52eddb57d2876a65a3f" 
address="unix:///run/containerd/s/dc94c30e6a1ca7ce51d6240eefa7900b10891a03a8c7f31845f58cafc5feb3bf" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:30:30.154739 systemd[1]: Started cri-containerd-a63c091e48fd155328d240406bfd1a3df9525dcd2657f52eddb57d2876a65a3f.scope - libcontainer container a63c091e48fd155328d240406bfd1a3df9525dcd2657f52eddb57d2876a65a3f. Sep 12 17:30:30.183654 containerd[1514]: time="2025-09-12T17:30:30.183608859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j4hpr,Uid:11da6945-4641-456c-8e9d-d5652c7fe22e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63c091e48fd155328d240406bfd1a3df9525dcd2657f52eddb57d2876a65a3f\"" Sep 12 17:30:30.189160 containerd[1514]: time="2025-09-12T17:30:30.189116825Z" level=info msg="CreateContainer within sandbox \"a63c091e48fd155328d240406bfd1a3df9525dcd2657f52eddb57d2876a65a3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:30:30.210957 containerd[1514]: time="2025-09-12T17:30:30.210906299Z" level=info msg="Container d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:30:30.218899 containerd[1514]: time="2025-09-12T17:30:30.218848993Z" level=info msg="CreateContainer within sandbox \"a63c091e48fd155328d240406bfd1a3df9525dcd2657f52eddb57d2876a65a3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c\"" Sep 12 17:30:30.219543 containerd[1514]: time="2025-09-12T17:30:30.219498416Z" level=info msg="StartContainer for \"d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c\"" Sep 12 17:30:30.221290 containerd[1514]: time="2025-09-12T17:30:30.221170320Z" level=info msg="connecting to shim d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c" address="unix:///run/containerd/s/dc94c30e6a1ca7ce51d6240eefa7900b10891a03a8c7f31845f58cafc5feb3bf" protocol=ttrpc version=3 Sep 12 
17:30:30.244728 systemd[1]: Started cri-containerd-d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c.scope - libcontainer container d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c. Sep 12 17:30:30.265476 containerd[1514]: time="2025-09-12T17:30:30.265405926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pxmzb,Uid:9d0eb74e-2309-48ab-8066-5a78b22c023d,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:30:30.280348 containerd[1514]: time="2025-09-12T17:30:30.280290417Z" level=info msg="StartContainer for \"d09af675c2837c985a30247e8474144a168f6b320e5d385bbc300a90255efe0c\" returns successfully" Sep 12 17:30:30.300155 containerd[1514]: time="2025-09-12T17:30:30.300079858Z" level=info msg="connecting to shim cee2ed03e0a3505b7659b6117bdcb83ad22be07f43d95eba4ee4facd773f6454" address="unix:///run/containerd/s/50411221676957564bba1d12d7226484ebb90d6a5059f1a14833c709fe53565e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:30:30.341131 systemd[1]: Started cri-containerd-cee2ed03e0a3505b7659b6117bdcb83ad22be07f43d95eba4ee4facd773f6454.scope - libcontainer container cee2ed03e0a3505b7659b6117bdcb83ad22be07f43d95eba4ee4facd773f6454. 
Sep 12 17:30:30.373700 containerd[1514]: time="2025-09-12T17:30:30.373640638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pxmzb,Uid:9d0eb74e-2309-48ab-8066-5a78b22c023d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cee2ed03e0a3505b7659b6117bdcb83ad22be07f43d95eba4ee4facd773f6454\"" Sep 12 17:30:30.377060 containerd[1514]: time="2025-09-12T17:30:30.376810769Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:30:30.871759 kubelet[2670]: I0912 17:30:30.871605 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j4hpr" podStartSLOduration=1.8715856830000002 podStartE2EDuration="1.871585683s" podCreationTimestamp="2025-09-12 17:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:30:30.870598937 +0000 UTC m=+8.143854766" watchObservedRunningTime="2025-09-12 17:30:30.871585683 +0000 UTC m=+8.144841512" Sep 12 17:30:30.973586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860683649.mount: Deactivated successfully. Sep 12 17:30:31.731901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110640459.mount: Deactivated successfully. 
Sep 12 17:30:32.189349 containerd[1514]: time="2025-09-12T17:30:32.189286205Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:32.189710 containerd[1514]: time="2025-09-12T17:30:32.189663851Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:30:32.190512 containerd[1514]: time="2025-09-12T17:30:32.190479827Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:32.192394 containerd[1514]: time="2025-09-12T17:30:32.192346353Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:32.193182 containerd[1514]: time="2025-09-12T17:30:32.193152175Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.816300715s" Sep 12 17:30:32.193236 containerd[1514]: time="2025-09-12T17:30:32.193188233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:30:32.195263 containerd[1514]: time="2025-09-12T17:30:32.195222335Z" level=info msg="CreateContainer within sandbox \"cee2ed03e0a3505b7659b6117bdcb83ad22be07f43d95eba4ee4facd773f6454\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:30:32.201966 containerd[1514]: time="2025-09-12T17:30:32.201931988Z" level=info msg="Container 
91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:30:32.208480 containerd[1514]: time="2025-09-12T17:30:32.208394073Z" level=info msg="CreateContainer within sandbox \"cee2ed03e0a3505b7659b6117bdcb83ad22be07f43d95eba4ee4facd773f6454\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46\"" Sep 12 17:30:32.208949 containerd[1514]: time="2025-09-12T17:30:32.208921228Z" level=info msg="StartContainer for \"91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46\"" Sep 12 17:30:32.210083 containerd[1514]: time="2025-09-12T17:30:32.210001680Z" level=info msg="connecting to shim 91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46" address="unix:///run/containerd/s/50411221676957564bba1d12d7226484ebb90d6a5059f1a14833c709fe53565e" protocol=ttrpc version=3 Sep 12 17:30:32.231722 systemd[1]: Started cri-containerd-91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46.scope - libcontainer container 91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46. 
Sep 12 17:30:32.262223 containerd[1514]: time="2025-09-12T17:30:32.262183344Z" level=info msg="StartContainer for \"91a90cdb8450b73e296232d536bc0cf3f90626ae8c51f289abcb493db9972e46\" returns successfully" Sep 12 17:30:32.874823 kubelet[2670]: I0912 17:30:32.874671 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-pxmzb" podStartSLOduration=2.055621363 podStartE2EDuration="3.874633487s" podCreationTimestamp="2025-09-12 17:30:29 +0000 UTC" firstStartedPulling="2025-09-12 17:30:30.375033738 +0000 UTC m=+7.648289567" lastFinishedPulling="2025-09-12 17:30:32.194045863 +0000 UTC m=+9.467301691" observedRunningTime="2025-09-12 17:30:32.874644201 +0000 UTC m=+10.147900030" watchObservedRunningTime="2025-09-12 17:30:32.874633487 +0000 UTC m=+10.147889316" Sep 12 17:30:36.497380 update_engine[1499]: I20250912 17:30:36.496635 1499 update_attempter.cc:509] Updating boot flags... Sep 12 17:30:37.693799 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 12 17:30:37.695710 sshd[1733]: Connection closed by 10.0.0.1 port 60998 Sep 12 17:30:37.697645 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:37.702069 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:30:37.702668 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:60998.service: Deactivated successfully. Sep 12 17:30:37.707092 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:30:37.708124 systemd[1]: session-7.scope: Consumed 7.849s CPU time, 217.8M memory peak. Sep 12 17:30:37.710987 systemd-logind[1496]: Removed session 7. Sep 12 17:30:41.971943 systemd[1]: Created slice kubepods-besteffort-podf4fa1145_e947_40a6_a36c_333d0234316e.slice - libcontainer container kubepods-besteffort-podf4fa1145_e947_40a6_a36c_333d0234316e.slice. 
Sep 12 17:30:42.031713 kubelet[2670]: I0912 17:30:42.031617 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4fa1145-e947-40a6-a36c-333d0234316e-tigera-ca-bundle\") pod \"calico-typha-5d4665b74d-5rvnb\" (UID: \"f4fa1145-e947-40a6-a36c-333d0234316e\") " pod="calico-system/calico-typha-5d4665b74d-5rvnb" Sep 12 17:30:42.032406 kubelet[2670]: I0912 17:30:42.031855 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm5jc\" (UniqueName: \"kubernetes.io/projected/f4fa1145-e947-40a6-a36c-333d0234316e-kube-api-access-jm5jc\") pod \"calico-typha-5d4665b74d-5rvnb\" (UID: \"f4fa1145-e947-40a6-a36c-333d0234316e\") " pod="calico-system/calico-typha-5d4665b74d-5rvnb" Sep 12 17:30:42.032764 kubelet[2670]: I0912 17:30:42.031883 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f4fa1145-e947-40a6-a36c-333d0234316e-typha-certs\") pod \"calico-typha-5d4665b74d-5rvnb\" (UID: \"f4fa1145-e947-40a6-a36c-333d0234316e\") " pod="calico-system/calico-typha-5d4665b74d-5rvnb" Sep 12 17:30:42.282734 containerd[1514]: time="2025-09-12T17:30:42.282682216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d4665b74d-5rvnb,Uid:f4fa1145-e947-40a6-a36c-333d0234316e,Namespace:calico-system,Attempt:0,}" Sep 12 17:30:42.391027 systemd[1]: Created slice kubepods-besteffort-pod03e10647_9a77_4f68_9e92_fd4a64b9f973.slice - libcontainer container kubepods-besteffort-pod03e10647_9a77_4f68_9e92_fd4a64b9f973.slice. 
Sep 12 17:30:42.395576 containerd[1514]: time="2025-09-12T17:30:42.395512489Z" level=info msg="connecting to shim abb050652926494f28680946ab5b3fb09c328ae615e567fb881d7107d010240e" address="unix:///run/containerd/s/748a4c1cf4cb3e28e9950c9570365d2d626b5013ac602f6beb46eaaa7605a3b6" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:30:42.435563 kubelet[2670]: I0912 17:30:42.434676 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-flexvol-driver-host\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435563 kubelet[2670]: I0912 17:30:42.434723 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-policysync\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435563 kubelet[2670]: I0912 17:30:42.434742 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-var-run-calico\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435563 kubelet[2670]: I0912 17:30:42.434781 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/03e10647-9a77-4f68-9e92-fd4a64b9f973-node-certs\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435563 kubelet[2670]: I0912 17:30:42.434826 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-lib-modules\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435797 kubelet[2670]: I0912 17:30:42.434873 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03e10647-9a77-4f68-9e92-fd4a64b9f973-tigera-ca-bundle\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435797 kubelet[2670]: I0912 17:30:42.435010 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrzj\" (UniqueName: \"kubernetes.io/projected/03e10647-9a77-4f68-9e92-fd4a64b9f973-kube-api-access-hzrzj\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435797 kubelet[2670]: I0912 17:30:42.435037 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-cni-net-dir\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435797 kubelet[2670]: I0912 17:30:42.435086 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-xtables-lock\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435797 kubelet[2670]: I0912 17:30:42.435108 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-cni-bin-dir\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435896 kubelet[2670]: I0912 17:30:42.435122 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-cni-log-dir\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.435896 kubelet[2670]: I0912 17:30:42.435153 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03e10647-9a77-4f68-9e92-fd4a64b9f973-var-lib-calico\") pod \"calico-node-zxf2h\" (UID: \"03e10647-9a77-4f68-9e92-fd4a64b9f973\") " pod="calico-system/calico-node-zxf2h"
Sep 12 17:30:42.465866 systemd[1]: Started cri-containerd-abb050652926494f28680946ab5b3fb09c328ae615e567fb881d7107d010240e.scope - libcontainer container abb050652926494f28680946ab5b3fb09c328ae615e567fb881d7107d010240e.
Sep 12 17:30:42.498839 containerd[1514]: time="2025-09-12T17:30:42.498798055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d4665b74d-5rvnb,Uid:f4fa1145-e947-40a6-a36c-333d0234316e,Namespace:calico-system,Attempt:0,} returns sandbox id \"abb050652926494f28680946ab5b3fb09c328ae615e567fb881d7107d010240e\""
Sep 12 17:30:42.505134 containerd[1514]: time="2025-09-12T17:30:42.504976132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:30:42.542670 kubelet[2670]: E0912 17:30:42.542580 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.542865 kubelet[2670]: W0912 17:30:42.542786 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.542865 kubelet[2670]: E0912 17:30:42.542826 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.547636 kubelet[2670]: E0912 17:30:42.547608 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.547636 kubelet[2670]: W0912 17:30:42.547629 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.547636 kubelet[2670]: E0912 17:30:42.547645 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.549947 kubelet[2670]: E0912 17:30:42.549926 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.550093 kubelet[2670]: W0912 17:30:42.550046 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.550093 kubelet[2670]: E0912 17:30:42.550066 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.600602 kubelet[2670]: E0912 17:30:42.598569 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gw4k" podUID="bc7fd84a-a166-4af2-8370-28006dcb2723"
Sep 12 17:30:42.614764 kubelet[2670]: E0912 17:30:42.614736 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.615045 kubelet[2670]: W0912 17:30:42.614923 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.615045 kubelet[2670]: E0912 17:30:42.614952 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.615378 kubelet[2670]: E0912 17:30:42.615222 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.615378 kubelet[2670]: W0912 17:30:42.615234 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.615378 kubelet[2670]: E0912 17:30:42.615301 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.615559 kubelet[2670]: E0912 17:30:42.615545 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.615612 kubelet[2670]: W0912 17:30:42.615601 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.615681 kubelet[2670]: E0912 17:30:42.615669 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.615898 kubelet[2670]: E0912 17:30:42.615885 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.615968 kubelet[2670]: W0912 17:30:42.615957 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.616026 kubelet[2670]: E0912 17:30:42.616013 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.616326 kubelet[2670]: E0912 17:30:42.616238 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.616326 kubelet[2670]: W0912 17:30:42.616249 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.616326 kubelet[2670]: E0912 17:30:42.616259 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.616547 kubelet[2670]: E0912 17:30:42.616523 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.616716 kubelet[2670]: W0912 17:30:42.616613 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.616716 kubelet[2670]: E0912 17:30:42.616630 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.616875 kubelet[2670]: E0912 17:30:42.616863 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.616929 kubelet[2670]: W0912 17:30:42.616919 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.616990 kubelet[2670]: E0912 17:30:42.616979 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.617325 kubelet[2670]: E0912 17:30:42.617199 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.617325 kubelet[2670]: W0912 17:30:42.617224 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.617325 kubelet[2670]: E0912 17:30:42.617236 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.617478 kubelet[2670]: E0912 17:30:42.617454 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.617547 kubelet[2670]: W0912 17:30:42.617524 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.617618 kubelet[2670]: E0912 17:30:42.617604 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.617891 kubelet[2670]: E0912 17:30:42.617804 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.617891 kubelet[2670]: W0912 17:30:42.617816 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.617891 kubelet[2670]: E0912 17:30:42.617825 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.618035 kubelet[2670]: E0912 17:30:42.618023 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.618086 kubelet[2670]: W0912 17:30:42.618077 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.618149 kubelet[2670]: E0912 17:30:42.618134 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.618501 kubelet[2670]: E0912 17:30:42.618379 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.618501 kubelet[2670]: W0912 17:30:42.618389 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.618501 kubelet[2670]: E0912 17:30:42.618399 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.618680 kubelet[2670]: E0912 17:30:42.618666 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.618737 kubelet[2670]: W0912 17:30:42.618726 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.618806 kubelet[2670]: E0912 17:30:42.618788 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.619092 kubelet[2670]: E0912 17:30:42.618998 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.619092 kubelet[2670]: W0912 17:30:42.619012 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.619092 kubelet[2670]: E0912 17:30:42.619022 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.619240 kubelet[2670]: E0912 17:30:42.619228 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.619292 kubelet[2670]: W0912 17:30:42.619282 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.619353 kubelet[2670]: E0912 17:30:42.619341 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.619725 kubelet[2670]: E0912 17:30:42.619602 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.619725 kubelet[2670]: W0912 17:30:42.619615 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.619725 kubelet[2670]: E0912 17:30:42.619626 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.619893 kubelet[2670]: E0912 17:30:42.619879 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.619956 kubelet[2670]: W0912 17:30:42.619944 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.620013 kubelet[2670]: E0912 17:30:42.620002 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.620204 kubelet[2670]: E0912 17:30:42.620192 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.620264 kubelet[2670]: W0912 17:30:42.620253 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.620320 kubelet[2670]: E0912 17:30:42.620309 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.620633 kubelet[2670]: E0912 17:30:42.620506 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.620633 kubelet[2670]: W0912 17:30:42.620517 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.620633 kubelet[2670]: E0912 17:30:42.620552 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.620805 kubelet[2670]: E0912 17:30:42.620790 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.620940 kubelet[2670]: W0912 17:30:42.620851 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.620940 kubelet[2670]: E0912 17:30:42.620868 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.636826 kubelet[2670]: E0912 17:30:42.636798 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.636826 kubelet[2670]: W0912 17:30:42.636821 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.637054 kubelet[2670]: E0912 17:30:42.636840 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.637054 kubelet[2670]: I0912 17:30:42.636873 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bc7fd84a-a166-4af2-8370-28006dcb2723-registration-dir\") pod \"csi-node-driver-9gw4k\" (UID: \"bc7fd84a-a166-4af2-8370-28006dcb2723\") " pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:42.637127 kubelet[2670]: E0912 17:30:42.637071 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.637127 kubelet[2670]: W0912 17:30:42.637083 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.637127 kubelet[2670]: E0912 17:30:42.637098 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.637127 kubelet[2670]: I0912 17:30:42.637113 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bc7fd84a-a166-4af2-8370-28006dcb2723-socket-dir\") pod \"csi-node-driver-9gw4k\" (UID: \"bc7fd84a-a166-4af2-8370-28006dcb2723\") " pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:42.637473 kubelet[2670]: E0912 17:30:42.637258 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.637473 kubelet[2670]: W0912 17:30:42.637266 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.637473 kubelet[2670]: E0912 17:30:42.637279 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.637473 kubelet[2670]: I0912 17:30:42.637293 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bc7fd84a-a166-4af2-8370-28006dcb2723-varrun\") pod \"csi-node-driver-9gw4k\" (UID: \"bc7fd84a-a166-4af2-8370-28006dcb2723\") " pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:42.637664 kubelet[2670]: E0912 17:30:42.637646 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.637740 kubelet[2670]: W0912 17:30:42.637726 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.637821 kubelet[2670]: E0912 17:30:42.637810 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.638136 kubelet[2670]: E0912 17:30:42.638038 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.638136 kubelet[2670]: W0912 17:30:42.638051 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.638136 kubelet[2670]: E0912 17:30:42.638065 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.638328 kubelet[2670]: E0912 17:30:42.638314 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.638392 kubelet[2670]: W0912 17:30:42.638380 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.638449 kubelet[2670]: E0912 17:30:42.638439 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.638660 kubelet[2670]: I0912 17:30:42.638644 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc7fd84a-a166-4af2-8370-28006dcb2723-kubelet-dir\") pod \"csi-node-driver-9gw4k\" (UID: \"bc7fd84a-a166-4af2-8370-28006dcb2723\") " pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:42.638789 kubelet[2670]: E0912 17:30:42.638777 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.638840 kubelet[2670]: W0912 17:30:42.638831 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.638952 kubelet[2670]: E0912 17:30:42.638895 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.639180 kubelet[2670]: E0912 17:30:42.639165 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.639265 kubelet[2670]: W0912 17:30:42.639247 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.639353 kubelet[2670]: E0912 17:30:42.639335 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.639618 kubelet[2670]: E0912 17:30:42.639604 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.639801 kubelet[2670]: W0912 17:30:42.639701 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.639801 kubelet[2670]: E0912 17:30:42.639740 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.640106 kubelet[2670]: E0912 17:30:42.640076 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.640106 kubelet[2670]: W0912 17:30:42.640091 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.640248 kubelet[2670]: E0912 17:30:42.640198 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.640248 kubelet[2670]: I0912 17:30:42.640224 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzfqp\" (UniqueName: \"kubernetes.io/projected/bc7fd84a-a166-4af2-8370-28006dcb2723-kube-api-access-xzfqp\") pod \"csi-node-driver-9gw4k\" (UID: \"bc7fd84a-a166-4af2-8370-28006dcb2723\") " pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:42.640553 kubelet[2670]: E0912 17:30:42.640473 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.640553 kubelet[2670]: W0912 17:30:42.640488 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.640553 kubelet[2670]: E0912 17:30:42.640501 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.640856 kubelet[2670]: E0912 17:30:42.640817 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.640856 kubelet[2670]: W0912 17:30:42.640830 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.640856 kubelet[2670]: E0912 17:30:42.640841 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.641148 kubelet[2670]: E0912 17:30:42.641112 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.641148 kubelet[2670]: W0912 17:30:42.641125 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.641148 kubelet[2670]: E0912 17:30:42.641134 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.641578 kubelet[2670]: E0912 17:30:42.641473 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.641578 kubelet[2670]: W0912 17:30:42.641490 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.641578 kubelet[2670]: E0912 17:30:42.641502 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.641877 kubelet[2670]: E0912 17:30:42.641858 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.641982 kubelet[2670]: W0912 17:30:42.641963 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.642061 kubelet[2670]: E0912 17:30:42.642047 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.695083 containerd[1514]: time="2025-09-12T17:30:42.695027978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxf2h,Uid:03e10647-9a77-4f68-9e92-fd4a64b9f973,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:42.720398 containerd[1514]: time="2025-09-12T17:30:42.720337855Z" level=info msg="connecting to shim 4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599" address="unix:///run/containerd/s/df9e6381352c7841c833b414a66bd4633fe4b444b0a313545055e81212b7bb42" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:30:42.742894 kubelet[2670]: E0912 17:30:42.742740 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.742894 kubelet[2670]: W0912 17:30:42.742765 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.742894 kubelet[2670]: E0912 17:30:42.742784 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.743167 kubelet[2670]: E0912 17:30:42.743154 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.743308 kubelet[2670]: W0912 17:30:42.743217 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.743308 kubelet[2670]: E0912 17:30:42.743246 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.743669 kubelet[2670]: E0912 17:30:42.743571 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.743669 kubelet[2670]: W0912 17:30:42.743586 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.743669 kubelet[2670]: E0912 17:30:42.743604 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.744030 kubelet[2670]: E0912 17:30:42.743936 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.744030 kubelet[2670]: W0912 17:30:42.743950 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.744030 kubelet[2670]: E0912 17:30:42.743968 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.744275 kubelet[2670]: E0912 17:30:42.744263 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.744345 kubelet[2670]: W0912 17:30:42.744333 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.744435 kubelet[2670]: E0912 17:30:42.744406 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:30:42.744669 kubelet[2670]: E0912 17:30:42.744613 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:42.744669 kubelet[2670]: W0912 17:30:42.744626 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:42.744669 kubelet[2670]: E0912 17:30:42.744655 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.744932 kubelet[2670]: E0912 17:30:42.744878 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.744932 kubelet[2670]: W0912 17:30:42.744890 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.744932 kubelet[2670]: E0912 17:30:42.744917 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.745218 kubelet[2670]: E0912 17:30:42.745163 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.745218 kubelet[2670]: W0912 17:30:42.745176 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.745218 kubelet[2670]: E0912 17:30:42.745198 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.745463 kubelet[2670]: E0912 17:30:42.745451 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.745559 kubelet[2670]: W0912 17:30:42.745525 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.745649 kubelet[2670]: E0912 17:30:42.745632 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.745846 kubelet[2670]: E0912 17:30:42.745832 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.745932 kubelet[2670]: W0912 17:30:42.745920 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.746005 kubelet[2670]: E0912 17:30:42.745989 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.746190 kubelet[2670]: E0912 17:30:42.746179 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.746265 kubelet[2670]: W0912 17:30:42.746251 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.746339 kubelet[2670]: E0912 17:30:42.746322 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.746543 kubelet[2670]: E0912 17:30:42.746518 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.746625 kubelet[2670]: W0912 17:30:42.746612 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.746695 kubelet[2670]: E0912 17:30:42.746679 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.746889 kubelet[2670]: E0912 17:30:42.746876 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.747012 kubelet[2670]: W0912 17:30:42.746951 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.747012 kubelet[2670]: E0912 17:30:42.746981 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.747201 kubelet[2670]: E0912 17:30:42.747188 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.747269 kubelet[2670]: W0912 17:30:42.747259 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.747336 kubelet[2670]: E0912 17:30:42.747321 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.747544 kubelet[2670]: E0912 17:30:42.747519 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.747603 kubelet[2670]: W0912 17:30:42.747582 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.747644 kubelet[2670]: E0912 17:30:42.747617 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.747842 kubelet[2670]: E0912 17:30:42.747824 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.747842 kubelet[2670]: W0912 17:30:42.747840 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.747906 kubelet[2670]: E0912 17:30:42.747861 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.748017 kubelet[2670]: E0912 17:30:42.748004 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.748017 kubelet[2670]: W0912 17:30:42.748017 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.748065 kubelet[2670]: E0912 17:30:42.748037 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.748169 kubelet[2670]: E0912 17:30:42.748154 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.748169 kubelet[2670]: W0912 17:30:42.748164 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.748221 kubelet[2670]: E0912 17:30:42.748182 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.748295 kubelet[2670]: E0912 17:30:42.748284 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.748295 kubelet[2670]: W0912 17:30:42.748293 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.748350 kubelet[2670]: E0912 17:30:42.748312 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.748577 kubelet[2670]: E0912 17:30:42.748563 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.748577 kubelet[2670]: W0912 17:30:42.748575 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.748633 kubelet[2670]: E0912 17:30:42.748596 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.748777 kubelet[2670]: E0912 17:30:42.748765 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.748777 kubelet[2670]: W0912 17:30:42.748776 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.748830 kubelet[2670]: E0912 17:30:42.748793 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.749114 kubelet[2670]: E0912 17:30:42.749100 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.749114 kubelet[2670]: W0912 17:30:42.749114 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.749170 kubelet[2670]: E0912 17:30:42.749127 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.749303 kubelet[2670]: E0912 17:30:42.749292 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.749303 kubelet[2670]: W0912 17:30:42.749302 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.749357 kubelet[2670]: E0912 17:30:42.749321 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.749519 kubelet[2670]: E0912 17:30:42.749505 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.749519 kubelet[2670]: W0912 17:30:42.749517 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.749588 kubelet[2670]: E0912 17:30:42.749546 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.749935 kubelet[2670]: E0912 17:30:42.749914 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.749935 kubelet[2670]: W0912 17:30:42.749934 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.749996 kubelet[2670]: E0912 17:30:42.749949 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:42.750761 systemd[1]: Started cri-containerd-4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599.scope - libcontainer container 4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599. Sep 12 17:30:42.760737 kubelet[2670]: E0912 17:30:42.760710 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:42.760737 kubelet[2670]: W0912 17:30:42.760730 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:42.760865 kubelet[2670]: E0912 17:30:42.760752 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:42.780282 containerd[1514]: time="2025-09-12T17:30:42.780244720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxf2h,Uid:03e10647-9a77-4f68-9e92-fd4a64b9f973,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\"" Sep 12 17:30:43.594670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827762750.mount: Deactivated successfully. Sep 12 17:30:44.154686 containerd[1514]: time="2025-09-12T17:30:44.154626784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:44.155761 containerd[1514]: time="2025-09-12T17:30:44.155725271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 17:30:44.156825 containerd[1514]: time="2025-09-12T17:30:44.156795726Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:44.158730 containerd[1514]: time="2025-09-12T17:30:44.158680989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:44.159404 containerd[1514]: time="2025-09-12T17:30:44.159354438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.654336838s" Sep 12 17:30:44.159404 containerd[1514]: time="2025-09-12T17:30:44.159395466Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:30:44.161294 containerd[1514]: time="2025-09-12T17:30:44.161261574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:30:44.182706 containerd[1514]: time="2025-09-12T17:30:44.182659519Z" level=info msg="CreateContainer within sandbox \"abb050652926494f28680946ab5b3fb09c328ae615e567fb881d7107d010240e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:30:44.193338 containerd[1514]: time="2025-09-12T17:30:44.192753564Z" level=info msg="Container 7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:30:44.200627 containerd[1514]: time="2025-09-12T17:30:44.200589092Z" level=info msg="CreateContainer within sandbox \"abb050652926494f28680946ab5b3fb09c328ae615e567fb881d7107d010240e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95\"" Sep 12 17:30:44.201482 containerd[1514]: time="2025-09-12T17:30:44.201440690Z" level=info msg="StartContainer for \"7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95\"" Sep 12 17:30:44.202897 containerd[1514]: time="2025-09-12T17:30:44.202873082Z" level=info msg="connecting to shim 7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95" address="unix:///run/containerd/s/748a4c1cf4cb3e28e9950c9570365d2d626b5013ac602f6beb46eaaa7605a3b6" protocol=ttrpc version=3 Sep 12 17:30:44.225718 systemd[1]: Started cri-containerd-7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95.scope - libcontainer container 7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95. 
Sep 12 17:30:44.263056 containerd[1514]: time="2025-09-12T17:30:44.263007873Z" level=info msg="StartContainer for \"7f49d1ef165e782f3fc758e25b30ca25dbcd2eb72715f2833c86cdc684c9fe95\" returns successfully"
Sep 12 17:30:44.815443 kubelet[2670]: E0912 17:30:44.815112 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gw4k" podUID="bc7fd84a-a166-4af2-8370-28006dcb2723"
Sep 12 17:30:44.923805 kubelet[2670]: I0912 17:30:44.923608 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d4665b74d-5rvnb" podStartSLOduration=2.266818689 podStartE2EDuration="3.923586437s" podCreationTimestamp="2025-09-12 17:30:41 +0000 UTC" firstStartedPulling="2025-09-12 17:30:42.50405787 +0000 UTC m=+19.777313699" lastFinishedPulling="2025-09-12 17:30:44.160825658 +0000 UTC m=+21.434081447" observedRunningTime="2025-09-12 17:30:44.919242994 +0000 UTC m=+22.192498863" watchObservedRunningTime="2025-09-12 17:30:44.923586437 +0000 UTC m=+22.196842266"
Sep 12 17:30:44.935220 kubelet[2670]: E0912 17:30:44.935030 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:44.935220 kubelet[2670]: W0912 17:30:44.935055 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:44.935220 kubelet[2670]: E0912 17:30:44.935075 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet messages above repeat with successive timestamps through Sep 12 17:30:44.963899]
Sep 12 17:30:44.964182 kubelet[2670]: E0912 17:30:44.964120 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:30:44.964210 kubelet[2670]: W0912 17:30:44.964184 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:30:44.964210 kubelet[2670]: E0912 17:30:44.964205 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:44.964426 kubelet[2670]: E0912 17:30:44.964415 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.964426 kubelet[2670]: W0912 17:30:44.964426 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.964482 kubelet[2670]: E0912 17:30:44.964456 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:44.964594 kubelet[2670]: E0912 17:30:44.964584 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.964671 kubelet[2670]: W0912 17:30:44.964594 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.964810 kubelet[2670]: E0912 17:30:44.964721 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:44.964914 kubelet[2670]: E0912 17:30:44.964899 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.964914 kubelet[2670]: W0912 17:30:44.964912 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.964972 kubelet[2670]: E0912 17:30:44.964930 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:44.965973 kubelet[2670]: E0912 17:30:44.965956 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.965973 kubelet[2670]: W0912 17:30:44.965971 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.966052 kubelet[2670]: E0912 17:30:44.965987 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:44.966688 kubelet[2670]: E0912 17:30:44.966613 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.966688 kubelet[2670]: W0912 17:30:44.966629 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.966842 kubelet[2670]: E0912 17:30:44.966814 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:44.967168 kubelet[2670]: E0912 17:30:44.967153 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.967194 kubelet[2670]: W0912 17:30:44.967167 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.967239 kubelet[2670]: E0912 17:30:44.967219 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:44.967393 kubelet[2670]: E0912 17:30:44.967379 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.967393 kubelet[2670]: W0912 17:30:44.967392 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.967457 kubelet[2670]: E0912 17:30:44.967436 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:44.967706 kubelet[2670]: E0912 17:30:44.967638 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.967706 kubelet[2670]: W0912 17:30:44.967648 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.967706 kubelet[2670]: E0912 17:30:44.967663 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:44.968012 kubelet[2670]: E0912 17:30:44.967996 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.968051 kubelet[2670]: W0912 17:30:44.968013 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.968051 kubelet[2670]: E0912 17:30:44.968031 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:44.968263 kubelet[2670]: E0912 17:30:44.968250 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.968263 kubelet[2670]: W0912 17:30:44.968262 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.968318 kubelet[2670]: E0912 17:30:44.968278 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:44.968874 kubelet[2670]: E0912 17:30:44.968720 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.968874 kubelet[2670]: W0912 17:30:44.968735 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.968874 kubelet[2670]: E0912 17:30:44.968760 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:30:44.969690 kubelet[2670]: E0912 17:30:44.969674 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:30:44.969810 kubelet[2670]: W0912 17:30:44.969795 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:30:44.969866 kubelet[2670]: E0912 17:30:44.969856 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:30:45.429443 containerd[1514]: time="2025-09-12T17:30:45.429385676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:45.431223 containerd[1514]: time="2025-09-12T17:30:45.431027677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 17:30:45.432073 containerd[1514]: time="2025-09-12T17:30:45.432034248Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:45.434011 containerd[1514]: time="2025-09-12T17:30:45.433981248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:45.434487 containerd[1514]: time="2025-09-12T17:30:45.434452163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.273147481s" Sep 12 17:30:45.434538 containerd[1514]: time="2025-09-12T17:30:45.434487833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:30:45.436446 containerd[1514]: time="2025-09-12T17:30:45.436418278Z" level=info msg="CreateContainer within sandbox \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:30:45.464586 containerd[1514]: time="2025-09-12T17:30:45.464523093Z" level=info msg="Container 775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:30:45.482357 containerd[1514]: time="2025-09-12T17:30:45.482234523Z" level=info msg="CreateContainer within sandbox \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\"" Sep 12 17:30:45.482883 containerd[1514]: time="2025-09-12T17:30:45.482853638Z" level=info msg="StartContainer for \"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\"" Sep 12 17:30:45.484238 containerd[1514]: time="2025-09-12T17:30:45.484204157Z" level=info msg="connecting to shim 775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4" address="unix:///run/containerd/s/df9e6381352c7841c833b414a66bd4633fe4b444b0a313545055e81212b7bb42" protocol=ttrpc version=3 Sep 12 17:30:45.511779 systemd[1]: Started cri-containerd-775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4.scope - libcontainer container 775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4. Sep 12 17:30:45.573700 containerd[1514]: time="2025-09-12T17:30:45.573071467Z" level=info msg="StartContainer for \"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\" returns successfully" Sep 12 17:30:45.590842 systemd[1]: cri-containerd-775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4.scope: Deactivated successfully. 
Sep 12 17:30:45.608718 containerd[1514]: time="2025-09-12T17:30:45.608645488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\" id:\"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\" pid:3380 exited_at:{seconds:1757698245 nanos:607929199}"
Sep 12 17:30:45.613060 containerd[1514]: time="2025-09-12T17:30:45.613015201Z" level=info msg="received exit event container_id:\"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\" id:\"775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4\" pid:3380 exited_at:{seconds:1757698245 nanos:607929199}"
Sep 12 17:30:45.674495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-775c26f554ac978d350408047c3ab24cb1ba2706b2630a4c28bc99c97bd677c4-rootfs.mount: Deactivated successfully.
Sep 12 17:30:45.908385 kubelet[2670]: I0912 17:30:45.908328 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:30:45.909676 kubelet[2670]: E0912 17:30:45.909433 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:45.910574 containerd[1514]: time="2025-09-12T17:30:45.910484528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:30:46.811719 kubelet[2670]: E0912 17:30:46.811653 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gw4k" podUID="bc7fd84a-a166-4af2-8370-28006dcb2723"
Sep 12 17:30:48.823824 kubelet[2670]: E0912 17:30:48.823698 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gw4k" podUID="bc7fd84a-a166-4af2-8370-28006dcb2723"
Sep 12 17:30:49.760490 containerd[1514]: time="2025-09-12T17:30:49.760426846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:49.761557 containerd[1514]: time="2025-09-12T17:30:49.761511542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 12 17:30:49.762482 containerd[1514]: time="2025-09-12T17:30:49.762448509Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:49.764886 containerd[1514]: time="2025-09-12T17:30:49.764857972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:49.765812 containerd[1514]: time="2025-09-12T17:30:49.765771904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.855226751s"
Sep 12 17:30:49.765812 containerd[1514]: time="2025-09-12T17:30:49.765805417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 12 17:30:49.769367 containerd[1514]: time="2025-09-12T17:30:49.769291818Z" level=info msg="CreateContainer within sandbox \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:30:49.779572 containerd[1514]: time="2025-09-12T17:30:49.778847926Z" level=info msg="Container 10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:30:49.786370 containerd[1514]: time="2025-09-12T17:30:49.786323505Z" level=info msg="CreateContainer within sandbox \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\""
Sep 12 17:30:49.786853 containerd[1514]: time="2025-09-12T17:30:49.786827921Z" level=info msg="StartContainer for \"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\""
Sep 12 17:30:49.788566 containerd[1514]: time="2025-09-12T17:30:49.788265664Z" level=info msg="connecting to shim 10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb" address="unix:///run/containerd/s/df9e6381352c7841c833b414a66bd4633fe4b444b0a313545055e81212b7bb42" protocol=ttrpc version=3
Sep 12 17:30:49.808714 systemd[1]: Started cri-containerd-10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb.scope - libcontainer container 10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb.
Sep 12 17:30:49.885866 containerd[1514]: time="2025-09-12T17:30:49.885825981Z" level=info msg="StartContainer for \"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\" returns successfully"
Sep 12 17:30:50.467380 systemd[1]: cri-containerd-10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb.scope: Deactivated successfully.
Sep 12 17:30:50.468352 systemd[1]: cri-containerd-10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb.scope: Consumed 449ms CPU time, 174.7M memory peak, 2.4M read from disk, 165.8M written to disk.
Sep 12 17:30:50.479690 containerd[1514]: time="2025-09-12T17:30:50.479632907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\" id:\"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\" pid:3441 exited_at:{seconds:1757698250 nanos:479134443}"
Sep 12 17:30:50.479846 containerd[1514]: time="2025-09-12T17:30:50.479731728Z" level=info msg="received exit event container_id:\"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\" id:\"10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb\" pid:3441 exited_at:{seconds:1757698250 nanos:479134443}"
Sep 12 17:30:50.485912 kubelet[2670]: I0912 17:30:50.485882 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:30:50.506629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ac257b6490bf65ca0934e6fbc1d7a0ccdc045fcb2567cccc107c1e7b696feb-rootfs.mount: Deactivated successfully.
Sep 12 17:30:50.550054 systemd[1]: Created slice kubepods-burstable-podffa23418_ac3c_4687_bab6_a901c353ead3.slice - libcontainer container kubepods-burstable-podffa23418_ac3c_4687_bab6_a901c353ead3.slice.
Sep 12 17:30:50.564456 systemd[1]: Created slice kubepods-besteffort-pod943bb34e_492e_4928_829b_cfc792fd8d25.slice - libcontainer container kubepods-besteffort-pod943bb34e_492e_4928_829b_cfc792fd8d25.slice.
Sep 12 17:30:50.572305 systemd[1]: Created slice kubepods-burstable-pod2a5e96d8_a735_4e2c_b56d_4367bb835a56.slice - libcontainer container kubepods-burstable-pod2a5e96d8_a735_4e2c_b56d_4367bb835a56.slice.
Sep 12 17:30:50.597992 systemd[1]: Created slice kubepods-besteffort-pod45118968_6c3d_4888_bb4b_65a6e25541e3.slice - libcontainer container kubepods-besteffort-pod45118968_6c3d_4888_bb4b_65a6e25541e3.slice.
Sep 12 17:30:50.605337 systemd[1]: Created slice kubepods-besteffort-pod08410efb_654b_423d_9c11_9c6f67a8af4a.slice - libcontainer container kubepods-besteffort-pod08410efb_654b_423d_9c11_9c6f67a8af4a.slice.
Sep 12 17:30:50.605737 kubelet[2670]: I0912 17:30:50.605701 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68z2m\" (UniqueName: \"kubernetes.io/projected/45118968-6c3d-4888-bb4b-65a6e25541e3-kube-api-access-68z2m\") pod \"calico-kube-controllers-75597c4f66-6cgkl\" (UID: \"45118968-6c3d-4888-bb4b-65a6e25541e3\") " pod="calico-system/calico-kube-controllers-75597c4f66-6cgkl"
Sep 12 17:30:50.605737 kubelet[2670]: I0912 17:30:50.605790 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-backend-key-pair\") pod \"whisker-7449768bbd-nzjr5\" (UID: \"943bb34e-492e-4928-829b-cfc792fd8d25\") " pod="calico-system/whisker-7449768bbd-nzjr5"
Sep 12 17:30:50.605737 kubelet[2670]: I0912 17:30:50.605817 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-ca-bundle\") pod \"whisker-7449768bbd-nzjr5\" (UID: \"943bb34e-492e-4928-829b-cfc792fd8d25\") " pod="calico-system/whisker-7449768bbd-nzjr5"
Sep 12 17:30:50.605737 kubelet[2670]: I0912 17:30:50.605835 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbrt\" (UniqueName: \"kubernetes.io/projected/2a5e96d8-a735-4e2c-b56d-4367bb835a56-kube-api-access-csbrt\") pod \"coredns-668d6bf9bc-6clxf\" (UID: \"2a5e96d8-a735-4e2c-b56d-4367bb835a56\") " pod="kube-system/coredns-668d6bf9bc-6clxf"
Sep 12 17:30:50.606039 kubelet[2670]: I0912 17:30:50.605864 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45118968-6c3d-4888-bb4b-65a6e25541e3-tigera-ca-bundle\") pod \"calico-kube-controllers-75597c4f66-6cgkl\" (UID: \"45118968-6c3d-4888-bb4b-65a6e25541e3\") " pod="calico-system/calico-kube-controllers-75597c4f66-6cgkl"
Sep 12 17:30:50.606039 kubelet[2670]: I0912 17:30:50.605881 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4abd5bc0-e420-4f4a-8dbb-8520b88f50a2-calico-apiserver-certs\") pod \"calico-apiserver-557c55577d-vhmbk\" (UID: \"4abd5bc0-e420-4f4a-8dbb-8520b88f50a2\") " pod="calico-apiserver/calico-apiserver-557c55577d-vhmbk"
Sep 12 17:30:50.606039 kubelet[2670]: I0912 17:30:50.605897 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgqz\" (UniqueName: \"kubernetes.io/projected/08410efb-654b-423d-9c11-9c6f67a8af4a-kube-api-access-2qgqz\") pod \"calico-apiserver-557c55577d-bkq8q\" (UID: \"08410efb-654b-423d-9c11-9c6f67a8af4a\") " pod="calico-apiserver/calico-apiserver-557c55577d-bkq8q"
Sep 12 17:30:50.606039 kubelet[2670]: I0912 17:30:50.605915 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/46a5a404-015e-432b-aefc-2a536cc9a9bb-goldmane-key-pair\") pod \"goldmane-54d579b49d-d9d6j\" (UID: \"46a5a404-015e-432b-aefc-2a536cc9a9bb\") " pod="calico-system/goldmane-54d579b49d-d9d6j"
Sep 12 17:30:50.606039 kubelet[2670]: I0912 17:30:50.605932 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhvz\" (UniqueName: \"kubernetes.io/projected/46a5a404-015e-432b-aefc-2a536cc9a9bb-kube-api-access-dmhvz\") pod \"goldmane-54d579b49d-d9d6j\" (UID: \"46a5a404-015e-432b-aefc-2a536cc9a9bb\") " pod="calico-system/goldmane-54d579b49d-d9d6j"
Sep 12 17:30:50.606148 kubelet[2670]: I0912 17:30:50.605951 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/08410efb-654b-423d-9c11-9c6f67a8af4a-calico-apiserver-certs\") pod \"calico-apiserver-557c55577d-bkq8q\" (UID: \"08410efb-654b-423d-9c11-9c6f67a8af4a\") " pod="calico-apiserver/calico-apiserver-557c55577d-bkq8q"
Sep 12 17:30:50.606148 kubelet[2670]: I0912 17:30:50.605968 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh6pl\" (UniqueName: \"kubernetes.io/projected/ffa23418-ac3c-4687-bab6-a901c353ead3-kube-api-access-wh6pl\") pod \"coredns-668d6bf9bc-tx66k\" (UID: \"ffa23418-ac3c-4687-bab6-a901c353ead3\") " pod="kube-system/coredns-668d6bf9bc-tx66k"
Sep 12 17:30:50.606148 kubelet[2670]: I0912 17:30:50.605986 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsm57\" (UniqueName: \"kubernetes.io/projected/4abd5bc0-e420-4f4a-8dbb-8520b88f50a2-kube-api-access-hsm57\") pod \"calico-apiserver-557c55577d-vhmbk\" (UID: \"4abd5bc0-e420-4f4a-8dbb-8520b88f50a2\") " pod="calico-apiserver/calico-apiserver-557c55577d-vhmbk"
Sep 12 17:30:50.606148 kubelet[2670]: I0912 17:30:50.606004 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffa23418-ac3c-4687-bab6-a901c353ead3-config-volume\") pod \"coredns-668d6bf9bc-tx66k\" (UID: \"ffa23418-ac3c-4687-bab6-a901c353ead3\") " pod="kube-system/coredns-668d6bf9bc-tx66k"
Sep 12 17:30:50.606148 kubelet[2670]: I0912 17:30:50.606021 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a5a404-015e-432b-aefc-2a536cc9a9bb-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-d9d6j\" (UID: \"46a5a404-015e-432b-aefc-2a536cc9a9bb\") " pod="calico-system/goldmane-54d579b49d-d9d6j"
Sep 12 17:30:50.606262 kubelet[2670]: I0912 17:30:50.606041 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a5e96d8-a735-4e2c-b56d-4367bb835a56-config-volume\") pod \"coredns-668d6bf9bc-6clxf\" (UID: \"2a5e96d8-a735-4e2c-b56d-4367bb835a56\") " pod="kube-system/coredns-668d6bf9bc-6clxf"
Sep 12 17:30:50.606262 kubelet[2670]: I0912 17:30:50.606060 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rt8\" (UniqueName: \"kubernetes.io/projected/943bb34e-492e-4928-829b-cfc792fd8d25-kube-api-access-d6rt8\") pod \"whisker-7449768bbd-nzjr5\" (UID: \"943bb34e-492e-4928-829b-cfc792fd8d25\") " pod="calico-system/whisker-7449768bbd-nzjr5"
Sep 12 17:30:50.606262 kubelet[2670]: I0912 17:30:50.606075 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46a5a404-015e-432b-aefc-2a536cc9a9bb-config\") pod \"goldmane-54d579b49d-d9d6j\" (UID: \"46a5a404-015e-432b-aefc-2a536cc9a9bb\") " pod="calico-system/goldmane-54d579b49d-d9d6j"
Sep 12 17:30:50.612399 systemd[1]: Created slice kubepods-besteffort-pod4abd5bc0_e420_4f4a_8dbb_8520b88f50a2.slice - libcontainer container kubepods-besteffort-pod4abd5bc0_e420_4f4a_8dbb_8520b88f50a2.slice.
Sep 12 17:30:50.618165 systemd[1]: Created slice kubepods-besteffort-pod46a5a404_015e_432b_aefc_2a536cc9a9bb.slice - libcontainer container kubepods-besteffort-pod46a5a404_015e_432b_aefc_2a536cc9a9bb.slice.
Sep 12 17:30:50.817128 systemd[1]: Created slice kubepods-besteffort-podbc7fd84a_a166_4af2_8370_28006dcb2723.slice - libcontainer container kubepods-besteffort-podbc7fd84a_a166_4af2_8370_28006dcb2723.slice.
Sep 12 17:30:50.819583 containerd[1514]: time="2025-09-12T17:30:50.819555417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gw4k,Uid:bc7fd84a-a166-4af2-8370-28006dcb2723,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:50.858013 kubelet[2670]: E0912 17:30:50.857939 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:50.858741 containerd[1514]: time="2025-09-12T17:30:50.858704807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tx66k,Uid:ffa23418-ac3c-4687-bab6-a901c353ead3,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:50.893962 kubelet[2670]: E0912 17:30:50.893923 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:50.894753 containerd[1514]: time="2025-09-12T17:30:50.894706925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7449768bbd-nzjr5,Uid:943bb34e-492e-4928-829b-cfc792fd8d25,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:50.895341 containerd[1514]: time="2025-09-12T17:30:50.895310408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6clxf,Uid:2a5e96d8-a735-4e2c-b56d-4367bb835a56,Namespace:kube-system,Attempt:0,}"
Sep 12 17:30:50.904810 containerd[1514]: time="2025-09-12T17:30:50.904767660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75597c4f66-6cgkl,Uid:45118968-6c3d-4888-bb4b-65a6e25541e3,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:50.911204 containerd[1514]: time="2025-09-12T17:30:50.910689555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-bkq8q,Uid:08410efb-654b-423d-9c11-9c6f67a8af4a,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:30:50.917483 containerd[1514]: time="2025-09-12T17:30:50.917384020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-vhmbk,Uid:4abd5bc0-e420-4f4a-8dbb-8520b88f50a2,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:30:50.921595 containerd[1514]: time="2025-09-12T17:30:50.921504183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d9d6j,Uid:46a5a404-015e-432b-aefc-2a536cc9a9bb,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:50.939995 containerd[1514]: time="2025-09-12T17:30:50.939870032Z" level=error msg="Failed to destroy network for sandbox \"487d26d08c8b21517326687807f95992535c4c6f12bdf26616cb80ba9e2dcf03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:30:50.943121 containerd[1514]: time="2025-09-12T17:30:50.943081411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:30:50.945907 containerd[1514]: time="2025-09-12T17:30:50.945864353Z" level=error msg="Failed to destroy network for sandbox \"2427793cc50a5a60db643c4683abb173cb6c22cf6631862843c2e09a2970ef42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:30:50.957813 containerd[1514]: time="2025-09-12T17:30:50.957150411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gw4k,Uid:bc7fd84a-a166-4af2-8370-28006dcb2723,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"487d26d08c8b21517326687807f95992535c4c6f12bdf26616cb80ba9e2dcf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:30:50.961363 kubelet[2670]: E0912 17:30:50.961279 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487d26d08c8b21517326687807f95992535c4c6f12bdf26616cb80ba9e2dcf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:30:50.961502 kubelet[2670]: E0912 17:30:50.961387 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487d26d08c8b21517326687807f95992535c4c6f12bdf26616cb80ba9e2dcf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:50.962798 kubelet[2670]: E0912 17:30:50.962691 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"487d26d08c8b21517326687807f95992535c4c6f12bdf26616cb80ba9e2dcf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gw4k"
Sep 12 17:30:50.962798 kubelet[2670]: E0912 17:30:50.962790 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gw4k_calico-system(bc7fd84a-a166-4af2-8370-28006dcb2723)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gw4k_calico-system(bc7fd84a-a166-4af2-8370-28006dcb2723)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"487d26d08c8b21517326687807f95992535c4c6f12bdf26616cb80ba9e2dcf03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gw4k" podUID="bc7fd84a-a166-4af2-8370-28006dcb2723"
Sep 12 17:30:50.963846 containerd[1514]: time="2025-09-12T17:30:50.962988202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tx66k,Uid:ffa23418-ac3c-4687-bab6-a901c353ead3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2427793cc50a5a60db643c4683abb173cb6c22cf6631862843c2e09a2970ef42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:30:50.964717 kubelet[2670]: E0912 17:30:50.964648 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2427793cc50a5a60db643c4683abb173cb6c22cf6631862843c2e09a2970ef42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:30:50.964717 kubelet[2670]: E0912 17:30:50.964703 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2427793cc50a5a60db643c4683abb173cb6c22cf6631862843c2e09a2970ef42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tx66k"
Sep 12 17:30:50.964867 kubelet[2670]: E0912 17:30:50.964722 2670 kuberuntime_manager.go:1237]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2427793cc50a5a60db643c4683abb173cb6c22cf6631862843c2e09a2970ef42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tx66k" Sep 12 17:30:50.964867 kubelet[2670]: E0912 17:30:50.964789 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tx66k_kube-system(ffa23418-ac3c-4687-bab6-a901c353ead3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tx66k_kube-system(ffa23418-ac3c-4687-bab6-a901c353ead3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2427793cc50a5a60db643c4683abb173cb6c22cf6631862843c2e09a2970ef42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tx66k" podUID="ffa23418-ac3c-4687-bab6-a901c353ead3" Sep 12 17:30:51.023871 containerd[1514]: time="2025-09-12T17:30:51.023813069Z" level=error msg="Failed to destroy network for sandbox \"579817a95217370ec98372a9904cb69368a645095fb8928072c507e6beee1458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.028023 containerd[1514]: time="2025-09-12T17:30:51.027969276Z" level=error msg="Failed to destroy network for sandbox \"828bd965653506686358e93e71179001befb28a6aa37ec9af8bf202b4c94eaf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.028690 
containerd[1514]: time="2025-09-12T17:30:51.028645314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75597c4f66-6cgkl,Uid:45118968-6c3d-4888-bb4b-65a6e25541e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"579817a95217370ec98372a9904cb69368a645095fb8928072c507e6beee1458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.028905 kubelet[2670]: E0912 17:30:51.028865 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579817a95217370ec98372a9904cb69368a645095fb8928072c507e6beee1458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.029081 kubelet[2670]: E0912 17:30:51.028927 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579817a95217370ec98372a9904cb69368a645095fb8928072c507e6beee1458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75597c4f66-6cgkl" Sep 12 17:30:51.029081 kubelet[2670]: E0912 17:30:51.028948 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579817a95217370ec98372a9904cb69368a645095fb8928072c507e6beee1458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-75597c4f66-6cgkl" Sep 12 17:30:51.029081 kubelet[2670]: E0912 17:30:51.028987 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75597c4f66-6cgkl_calico-system(45118968-6c3d-4888-bb4b-65a6e25541e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75597c4f66-6cgkl_calico-system(45118968-6c3d-4888-bb4b-65a6e25541e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"579817a95217370ec98372a9904cb69368a645095fb8928072c507e6beee1458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75597c4f66-6cgkl" podUID="45118968-6c3d-4888-bb4b-65a6e25541e3" Sep 12 17:30:51.029504 containerd[1514]: time="2025-09-12T17:30:51.029368862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7449768bbd-nzjr5,Uid:943bb34e-492e-4928-829b-cfc792fd8d25,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"828bd965653506686358e93e71179001befb28a6aa37ec9af8bf202b4c94eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.029876 kubelet[2670]: E0912 17:30:51.029645 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828bd965653506686358e93e71179001befb28a6aa37ec9af8bf202b4c94eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.029876 kubelet[2670]: E0912 17:30:51.029681 2670 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828bd965653506686358e93e71179001befb28a6aa37ec9af8bf202b4c94eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7449768bbd-nzjr5" Sep 12 17:30:51.029876 kubelet[2670]: E0912 17:30:51.029696 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828bd965653506686358e93e71179001befb28a6aa37ec9af8bf202b4c94eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7449768bbd-nzjr5" Sep 12 17:30:51.029984 kubelet[2670]: E0912 17:30:51.029733 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7449768bbd-nzjr5_calico-system(943bb34e-492e-4928-829b-cfc792fd8d25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7449768bbd-nzjr5_calico-system(943bb34e-492e-4928-829b-cfc792fd8d25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"828bd965653506686358e93e71179001befb28a6aa37ec9af8bf202b4c94eaf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7449768bbd-nzjr5" podUID="943bb34e-492e-4928-829b-cfc792fd8d25" Sep 12 17:30:51.030155 containerd[1514]: time="2025-09-12T17:30:51.030115087Z" level=error msg="Failed to destroy network for sandbox \"d60b593a69f123e92fa6205178450230db3988ef3f5e5cf34008487f44949e79\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.031561 containerd[1514]: time="2025-09-12T17:30:51.031384697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-bkq8q,Uid:08410efb-654b-423d-9c11-9c6f67a8af4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60b593a69f123e92fa6205178450230db3988ef3f5e5cf34008487f44949e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.031951 kubelet[2670]: E0912 17:30:51.031894 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60b593a69f123e92fa6205178450230db3988ef3f5e5cf34008487f44949e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.031951 kubelet[2670]: E0912 17:30:51.031941 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60b593a69f123e92fa6205178450230db3988ef3f5e5cf34008487f44949e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c55577d-bkq8q" Sep 12 17:30:51.032038 kubelet[2670]: E0912 17:30:51.031957 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60b593a69f123e92fa6205178450230db3988ef3f5e5cf34008487f44949e79\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c55577d-bkq8q" Sep 12 17:30:51.032038 kubelet[2670]: E0912 17:30:51.031996 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-557c55577d-bkq8q_calico-apiserver(08410efb-654b-423d-9c11-9c6f67a8af4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-557c55577d-bkq8q_calico-apiserver(08410efb-654b-423d-9c11-9c6f67a8af4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d60b593a69f123e92fa6205178450230db3988ef3f5e5cf34008487f44949e79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-557c55577d-bkq8q" podUID="08410efb-654b-423d-9c11-9c6f67a8af4a" Sep 12 17:30:51.037995 containerd[1514]: time="2025-09-12T17:30:51.037936629Z" level=error msg="Failed to destroy network for sandbox \"71abbe39d921b96608fb0be469746e5709132ca1817a7899ec3bf4d92311ba96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.040170 containerd[1514]: time="2025-09-12T17:30:51.040113515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6clxf,Uid:2a5e96d8-a735-4e2c-b56d-4367bb835a56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71abbe39d921b96608fb0be469746e5709132ca1817a7899ec3bf4d92311ba96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 12 17:30:51.040418 kubelet[2670]: E0912 17:30:51.040320 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71abbe39d921b96608fb0be469746e5709132ca1817a7899ec3bf4d92311ba96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.040418 kubelet[2670]: E0912 17:30:51.040388 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71abbe39d921b96608fb0be469746e5709132ca1817a7899ec3bf4d92311ba96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6clxf" Sep 12 17:30:51.040418 kubelet[2670]: E0912 17:30:51.040406 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71abbe39d921b96608fb0be469746e5709132ca1817a7899ec3bf4d92311ba96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6clxf" Sep 12 17:30:51.040724 kubelet[2670]: E0912 17:30:51.040449 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6clxf_kube-system(2a5e96d8-a735-4e2c-b56d-4367bb835a56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6clxf_kube-system(2a5e96d8-a735-4e2c-b56d-4367bb835a56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71abbe39d921b96608fb0be469746e5709132ca1817a7899ec3bf4d92311ba96\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6clxf" podUID="2a5e96d8-a735-4e2c-b56d-4367bb835a56" Sep 12 17:30:51.045371 containerd[1514]: time="2025-09-12T17:30:51.045331249Z" level=error msg="Failed to destroy network for sandbox \"fd378d31c6ce86743e1d380e8a24a3f9c51f0f3ef57ba217dd50b047416787d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.046359 containerd[1514]: time="2025-09-12T17:30:51.046299473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d9d6j,Uid:46a5a404-015e-432b-aefc-2a536cc9a9bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd378d31c6ce86743e1d380e8a24a3f9c51f0f3ef57ba217dd50b047416787d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.046697 kubelet[2670]: E0912 17:30:51.046547 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd378d31c6ce86743e1d380e8a24a3f9c51f0f3ef57ba217dd50b047416787d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.046697 kubelet[2670]: E0912 17:30:51.046600 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd378d31c6ce86743e1d380e8a24a3f9c51f0f3ef57ba217dd50b047416787d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d9d6j" Sep 12 17:30:51.046697 kubelet[2670]: E0912 17:30:51.046618 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd378d31c6ce86743e1d380e8a24a3f9c51f0f3ef57ba217dd50b047416787d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-d9d6j" Sep 12 17:30:51.047081 kubelet[2670]: E0912 17:30:51.046831 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-d9d6j_calico-system(46a5a404-015e-432b-aefc-2a536cc9a9bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-d9d6j_calico-system(46a5a404-015e-432b-aefc-2a536cc9a9bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd378d31c6ce86743e1d380e8a24a3f9c51f0f3ef57ba217dd50b047416787d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-d9d6j" podUID="46a5a404-015e-432b-aefc-2a536cc9a9bb" Sep 12 17:30:51.047249 containerd[1514]: time="2025-09-12T17:30:51.046986949Z" level=error msg="Failed to destroy network for sandbox \"7f16c126d51a84c4130c37b0cb3b3952b2334a60a2aec3f63561aa3fd1eaa1a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.048979 containerd[1514]: time="2025-09-12T17:30:51.048706717Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-557c55577d-vhmbk,Uid:4abd5bc0-e420-4f4a-8dbb-8520b88f50a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f16c126d51a84c4130c37b0cb3b3952b2334a60a2aec3f63561aa3fd1eaa1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.049145 kubelet[2670]: E0912 17:30:51.048901 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f16c126d51a84c4130c37b0cb3b3952b2334a60a2aec3f63561aa3fd1eaa1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:30:51.049145 kubelet[2670]: E0912 17:30:51.048952 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f16c126d51a84c4130c37b0cb3b3952b2334a60a2aec3f63561aa3fd1eaa1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c55577d-vhmbk" Sep 12 17:30:51.049145 kubelet[2670]: E0912 17:30:51.048969 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f16c126d51a84c4130c37b0cb3b3952b2334a60a2aec3f63561aa3fd1eaa1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557c55577d-vhmbk" Sep 12 17:30:51.049222 kubelet[2670]: E0912 17:30:51.049016 2670 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-557c55577d-vhmbk_calico-apiserver(4abd5bc0-e420-4f4a-8dbb-8520b88f50a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-557c55577d-vhmbk_calico-apiserver(4abd5bc0-e420-4f4a-8dbb-8520b88f50a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f16c126d51a84c4130c37b0cb3b3952b2334a60a2aec3f63561aa3fd1eaa1a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-557c55577d-vhmbk" podUID="4abd5bc0-e420-4f4a-8dbb-8520b88f50a2" Sep 12 17:30:51.780810 systemd[1]: run-netns-cni\x2d9948d5f8\x2d8eea\x2d7922\x2da660\x2d530a53c1e309.mount: Deactivated successfully. Sep 12 17:30:51.780908 systemd[1]: run-netns-cni\x2d50ef248d\x2d13ea\x2d24f9\x2d6746\x2dd32e67b24fdd.mount: Deactivated successfully. Sep 12 17:30:51.780952 systemd[1]: run-netns-cni\x2dca6d30ca\x2db94c\x2d99ea\x2dc3ac\x2dc13ca5942291.mount: Deactivated successfully. Sep 12 17:30:51.780994 systemd[1]: run-netns-cni\x2d15bb8a98\x2da691\x2d6af3\x2d1b90\x2d814aea97c9e7.mount: Deactivated successfully. Sep 12 17:30:54.906615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108206653.mount: Deactivated successfully. 
Sep 12 17:30:55.204476 containerd[1514]: time="2025-09-12T17:30:55.204340951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:30:55.206102 containerd[1514]: time="2025-09-12T17:30:55.206054591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:55.207010 containerd[1514]: time="2025-09-12T17:30:55.206970663Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:55.207578 containerd[1514]: time="2025-09-12T17:30:55.207549022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:30:55.208544 containerd[1514]: time="2025-09-12T17:30:55.208167255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.265046851s" Sep 12 17:30:55.208544 containerd[1514]: time="2025-09-12T17:30:55.208197691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:30:55.218564 containerd[1514]: time="2025-09-12T17:30:55.218520845Z" level=info msg="CreateContainer within sandbox \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:30:55.242313 containerd[1514]: time="2025-09-12T17:30:55.242268320Z" level=info msg="Container 
9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:30:55.254357 containerd[1514]: time="2025-09-12T17:30:55.254200489Z" level=info msg="CreateContainer within sandbox \"4b535682434efc43cd779f779a65581bc7eab852a172d2da7ed7cfe7706b1599\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299\"" Sep 12 17:30:55.255131 containerd[1514]: time="2025-09-12T17:30:55.254997737Z" level=info msg="StartContainer for \"9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299\"" Sep 12 17:30:55.258483 containerd[1514]: time="2025-09-12T17:30:55.258448894Z" level=info msg="connecting to shim 9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299" address="unix:///run/containerd/s/df9e6381352c7841c833b414a66bd4633fe4b444b0a313545055e81212b7bb42" protocol=ttrpc version=3 Sep 12 17:30:55.278748 systemd[1]: Started cri-containerd-9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299.scope - libcontainer container 9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299. Sep 12 17:30:55.319561 containerd[1514]: time="2025-09-12T17:30:55.319427875Z" level=info msg="StartContainer for \"9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299\" returns successfully" Sep 12 17:30:55.446449 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:30:55.446616 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 12 17:30:55.643466 kubelet[2670]: I0912 17:30:55.643397 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-ca-bundle\") pod \"943bb34e-492e-4928-829b-cfc792fd8d25\" (UID: \"943bb34e-492e-4928-829b-cfc792fd8d25\") " Sep 12 17:30:55.643466 kubelet[2670]: I0912 17:30:55.643448 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6rt8\" (UniqueName: \"kubernetes.io/projected/943bb34e-492e-4928-829b-cfc792fd8d25-kube-api-access-d6rt8\") pod \"943bb34e-492e-4928-829b-cfc792fd8d25\" (UID: \"943bb34e-492e-4928-829b-cfc792fd8d25\") " Sep 12 17:30:55.643922 kubelet[2670]: I0912 17:30:55.643479 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-backend-key-pair\") pod \"943bb34e-492e-4928-829b-cfc792fd8d25\" (UID: \"943bb34e-492e-4928-829b-cfc792fd8d25\") " Sep 12 17:30:55.649055 kubelet[2670]: I0912 17:30:55.648989 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "943bb34e-492e-4928-829b-cfc792fd8d25" (UID: "943bb34e-492e-4928-829b-cfc792fd8d25"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:30:55.652812 kubelet[2670]: I0912 17:30:55.652751 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943bb34e-492e-4928-829b-cfc792fd8d25-kube-api-access-d6rt8" (OuterVolumeSpecName: "kube-api-access-d6rt8") pod "943bb34e-492e-4928-829b-cfc792fd8d25" (UID: "943bb34e-492e-4928-829b-cfc792fd8d25"). InnerVolumeSpecName "kube-api-access-d6rt8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:30:55.653287 kubelet[2670]: I0912 17:30:55.653254 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "943bb34e-492e-4928-829b-cfc792fd8d25" (UID: "943bb34e-492e-4928-829b-cfc792fd8d25"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:30:55.744673 kubelet[2670]: I0912 17:30:55.744610 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6rt8\" (UniqueName: \"kubernetes.io/projected/943bb34e-492e-4928-829b-cfc792fd8d25-kube-api-access-d6rt8\") on node \"localhost\" DevicePath \"\"" Sep 12 17:30:55.744673 kubelet[2670]: I0912 17:30:55.744646 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:30:55.744673 kubelet[2670]: I0912 17:30:55.744656 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943bb34e-492e-4928-829b-cfc792fd8d25-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:30:55.907304 systemd[1]: var-lib-kubelet-pods-943bb34e\x2d492e\x2d4928\x2d829b\x2dcfc792fd8d25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6rt8.mount: Deactivated successfully. Sep 12 17:30:55.907412 systemd[1]: var-lib-kubelet-pods-943bb34e\x2d492e\x2d4928\x2d829b\x2dcfc792fd8d25-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:30:55.962625 systemd[1]: Removed slice kubepods-besteffort-pod943bb34e_492e_4928_829b_cfc792fd8d25.slice - libcontainer container kubepods-besteffort-pod943bb34e_492e_4928_829b_cfc792fd8d25.slice. 
Sep 12 17:30:55.975779 kubelet[2670]: I0912 17:30:55.975716 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zxf2h" podStartSLOduration=1.547915261 podStartE2EDuration="13.975700306s" podCreationTimestamp="2025-09-12 17:30:42 +0000 UTC" firstStartedPulling="2025-09-12 17:30:42.781387789 +0000 UTC m=+20.054643618" lastFinishedPulling="2025-09-12 17:30:55.209172834 +0000 UTC m=+32.482428663" observedRunningTime="2025-09-12 17:30:55.974559312 +0000 UTC m=+33.247815221" watchObservedRunningTime="2025-09-12 17:30:55.975700306 +0000 UTC m=+33.248956095"
Sep 12 17:30:56.043804 systemd[1]: Created slice kubepods-besteffort-pod38422aa9_4df8_4a36_bae5_93534ca02faa.slice - libcontainer container kubepods-besteffort-pod38422aa9_4df8_4a36_bae5_93534ca02faa.slice.
Sep 12 17:30:56.147642 kubelet[2670]: I0912 17:30:56.147598 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38422aa9-4df8-4a36-bae5-93534ca02faa-whisker-backend-key-pair\") pod \"whisker-5599556c7f-l26nh\" (UID: \"38422aa9-4df8-4a36-bae5-93534ca02faa\") " pod="calico-system/whisker-5599556c7f-l26nh"
Sep 12 17:30:56.147642 kubelet[2670]: I0912 17:30:56.147645 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnlj2\" (UniqueName: \"kubernetes.io/projected/38422aa9-4df8-4a36-bae5-93534ca02faa-kube-api-access-xnlj2\") pod \"whisker-5599556c7f-l26nh\" (UID: \"38422aa9-4df8-4a36-bae5-93534ca02faa\") " pod="calico-system/whisker-5599556c7f-l26nh"
Sep 12 17:30:56.147829 kubelet[2670]: I0912 17:30:56.147669 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38422aa9-4df8-4a36-bae5-93534ca02faa-whisker-ca-bundle\") pod \"whisker-5599556c7f-l26nh\" (UID: \"38422aa9-4df8-4a36-bae5-93534ca02faa\") " pod="calico-system/whisker-5599556c7f-l26nh"
Sep 12 17:30:56.347180 containerd[1514]: time="2025-09-12T17:30:56.347134967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5599556c7f-l26nh,Uid:38422aa9-4df8-4a36-bae5-93534ca02faa,Namespace:calico-system,Attempt:0,}"
Sep 12 17:30:56.581179 systemd-networkd[1461]: cali79415779a96: Link UP
Sep 12 17:30:56.581379 systemd-networkd[1461]: cali79415779a96: Gained carrier
Sep 12 17:30:56.598314 containerd[1514]: 2025-09-12 17:30:56.392 [INFO][3817] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:30:56.598314 containerd[1514]: 2025-09-12 17:30:56.444 [INFO][3817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5599556c7f--l26nh-eth0 whisker-5599556c7f- calico-system 38422aa9-4df8-4a36-bae5-93534ca02faa 882 0 2025-09-12 17:30:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5599556c7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5599556c7f-l26nh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali79415779a96 [] [] }} ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-"
Sep 12 17:30:56.598314 containerd[1514]: 2025-09-12 17:30:56.445 [INFO][3817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.598314 containerd[1514]: 2025-09-12 17:30:56.522 [INFO][3832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" HandleID="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Workload="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.523 [INFO][3832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" HandleID="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Workload="localhost-k8s-whisker--5599556c7f--l26nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ff0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5599556c7f-l26nh", "timestamp":"2025-09-12 17:30:56.52291006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.523 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.523 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.523 [INFO][3832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.534 [INFO][3832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" host="localhost"
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.540 [INFO][3832] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.545 [INFO][3832] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.548 [INFO][3832] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.551 [INFO][3832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:30:56.598774 containerd[1514]: 2025-09-12 17:30:56.551 [INFO][3832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" host="localhost"
Sep 12 17:30:56.599050 containerd[1514]: 2025-09-12 17:30:56.553 [INFO][3832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4
Sep 12 17:30:56.599050 containerd[1514]: 2025-09-12 17:30:56.558 [INFO][3832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" host="localhost"
Sep 12 17:30:56.599050 containerd[1514]: 2025-09-12 17:30:56.564 [INFO][3832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" host="localhost"
Sep 12 17:30:56.599050 containerd[1514]: 2025-09-12 17:30:56.564 [INFO][3832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" host="localhost"
Sep 12 17:30:56.599050 containerd[1514]: 2025-09-12 17:30:56.565 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:30:56.599050 containerd[1514]: 2025-09-12 17:30:56.565 [INFO][3832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" HandleID="k8s-pod-network.4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Workload="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.599178 containerd[1514]: 2025-09-12 17:30:56.567 [INFO][3817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5599556c7f--l26nh-eth0", GenerateName:"whisker-5599556c7f-", Namespace:"calico-system", SelfLink:"", UID:"38422aa9-4df8-4a36-bae5-93534ca02faa", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5599556c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5599556c7f-l26nh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79415779a96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:30:56.599178 containerd[1514]: 2025-09-12 17:30:56.567 [INFO][3817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.599269 containerd[1514]: 2025-09-12 17:30:56.568 [INFO][3817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79415779a96 ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.599269 containerd[1514]: 2025-09-12 17:30:56.581 [INFO][3817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.599307 containerd[1514]: 2025-09-12 17:30:56.582 [INFO][3817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5599556c7f--l26nh-eth0", GenerateName:"whisker-5599556c7f-", Namespace:"calico-system", SelfLink:"", UID:"38422aa9-4df8-4a36-bae5-93534ca02faa", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5599556c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4", Pod:"whisker-5599556c7f-l26nh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79415779a96", MAC:"9a:66:a6:86:af:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:30:56.599374 containerd[1514]: 2025-09-12 17:30:56.595 [INFO][3817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" Namespace="calico-system" Pod="whisker-5599556c7f-l26nh" WorkloadEndpoint="localhost-k8s-whisker--5599556c7f--l26nh-eth0"
Sep 12 17:30:56.645381 containerd[1514]: time="2025-09-12T17:30:56.644872767Z" level=info msg="connecting to shim 4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4" address="unix:///run/containerd/s/3bdd5d91a9a0c9ba80e26c18aaf025aa80acbbb9570ad71364b60d8c4410dd03" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:30:56.688784 systemd[1]: Started cri-containerd-4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4.scope - libcontainer container 4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4.
Sep 12 17:30:56.700742 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:30:56.723813 containerd[1514]: time="2025-09-12T17:30:56.723775503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5599556c7f-l26nh,Uid:38422aa9-4df8-4a36-bae5-93534ca02faa,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4\""
Sep 12 17:30:56.725708 containerd[1514]: time="2025-09-12T17:30:56.725674261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 17:30:56.814499 kubelet[2670]: I0912 17:30:56.814140 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943bb34e-492e-4928-829b-cfc792fd8d25" path="/var/lib/kubelet/pods/943bb34e-492e-4928-829b-cfc792fd8d25/volumes"
Sep 12 17:30:57.122047 containerd[1514]: time="2025-09-12T17:30:57.121984208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299\" id:\"585105a8438eafc34a2239669acc4a347668e1a98a42b257ae3a2499559a249f\" pid:4007 exit_status:1 exited_at:{seconds:1757698257 nanos:121647102}"
Sep 12 17:30:57.351323 kubelet[2670]: I0912 17:30:57.350692 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:30:57.351323 kubelet[2670]: E0912 17:30:57.351088 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:57.662672 systemd-networkd[1461]: cali79415779a96: Gained IPv6LL
Sep 12 17:30:57.844598 containerd[1514]: time="2025-09-12T17:30:57.844120021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:57.844982 containerd[1514]: time="2025-09-12T17:30:57.844860510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 12 17:30:57.846207 containerd[1514]: time="2025-09-12T17:30:57.846150216Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:57.849728 containerd[1514]: time="2025-09-12T17:30:57.849670187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:30:57.850477 containerd[1514]: time="2025-09-12T17:30:57.850336879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.124622861s"
Sep 12 17:30:57.850477 containerd[1514]: time="2025-09-12T17:30:57.850370158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 12 17:30:57.853856 containerd[1514]: time="2025-09-12T17:30:57.853825372Z" level=info msg="CreateContainer within sandbox \"4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 17:30:57.869595 containerd[1514]: time="2025-09-12T17:30:57.869548989Z" level=info msg="Container d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:30:57.881463 containerd[1514]: time="2025-09-12T17:30:57.881381489Z" level=info msg="CreateContainer within sandbox \"4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda\""
Sep 12 17:30:57.881838 containerd[1514]: time="2025-09-12T17:30:57.881815471Z" level=info msg="StartContainer for \"d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda\""
Sep 12 17:30:57.882825 containerd[1514]: time="2025-09-12T17:30:57.882794150Z" level=info msg="connecting to shim d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda" address="unix:///run/containerd/s/3bdd5d91a9a0c9ba80e26c18aaf025aa80acbbb9570ad71364b60d8c4410dd03" protocol=ttrpc version=3
Sep 12 17:30:57.905721 systemd[1]: Started cri-containerd-d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda.scope - libcontainer container d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda.
Sep 12 17:30:57.948498 containerd[1514]: time="2025-09-12T17:30:57.948390622Z" level=info msg="StartContainer for \"d9d4d2066c61f2fe0bb83c08d44899248aa61f70a5183f0da4f8da7922dfccda\" returns successfully"
Sep 12 17:30:57.949993 containerd[1514]: time="2025-09-12T17:30:57.949967756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:30:57.973255 kubelet[2670]: E0912 17:30:57.972998 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:30:58.081292 containerd[1514]: time="2025-09-12T17:30:58.081249190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299\" id:\"ede0f474b90a1d449ea944baaa9ff035e84367af4acd9e9ad95d6fcd579e2af4\" pid:4071 exit_status:1 exited_at:{seconds:1757698258 nanos:80810488}"
Sep 12 17:30:58.394553 systemd-networkd[1461]: vxlan.calico: Link UP
Sep 12 17:30:58.394562 systemd-networkd[1461]: vxlan.calico: Gained carrier
Sep 12 17:30:59.903662 systemd-networkd[1461]: vxlan.calico: Gained IPv6LL
Sep 12 17:31:00.344982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097998028.mount: Deactivated successfully.
Sep 12 17:31:00.365524 containerd[1514]: time="2025-09-12T17:31:00.365471230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:00.366108 containerd[1514]: time="2025-09-12T17:31:00.366060847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700"
Sep 12 17:31:00.367463 containerd[1514]: time="2025-09-12T17:31:00.367435954Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:00.370500 containerd[1514]: time="2025-09-12T17:31:00.370455157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:00.371115 containerd[1514]: time="2025-09-12T17:31:00.370923339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.420919944s"
Sep 12 17:31:00.371115 containerd[1514]: time="2025-09-12T17:31:00.370958617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\""
Sep 12 17:31:00.373367 containerd[1514]: time="2025-09-12T17:31:00.373331765Z" level=info msg="CreateContainer within sandbox \"4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 17:31:00.452812 containerd[1514]: time="2025-09-12T17:31:00.452729725Z" level=info msg="Container f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:31:00.462778 containerd[1514]: time="2025-09-12T17:31:00.462713178Z" level=info msg="CreateContainer within sandbox \"4ab97b97c81d4f645597bf46cec8cff5b8cbfaf8d5bbd750cd8548c2fac233e4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8\""
Sep 12 17:31:00.464183 containerd[1514]: time="2025-09-12T17:31:00.463485148Z" level=info msg="StartContainer for \"f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8\""
Sep 12 17:31:00.464719 containerd[1514]: time="2025-09-12T17:31:00.464684221Z" level=info msg="connecting to shim f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8" address="unix:///run/containerd/s/3bdd5d91a9a0c9ba80e26c18aaf025aa80acbbb9570ad71364b60d8c4410dd03" protocol=ttrpc version=3
Sep 12 17:31:00.502750 systemd[1]: Started cri-containerd-f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8.scope - libcontainer container f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8.
Sep 12 17:31:00.555354 containerd[1514]: time="2025-09-12T17:31:00.555318585Z" level=info msg="StartContainer for \"f034d56b95b7b1a3fbd139fc06eba2b24157def6825d3ceb71a34f556cb48da8\" returns successfully"
Sep 12 17:31:01.013090 kubelet[2670]: I0912 17:31:01.013012 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5599556c7f-l26nh" podStartSLOduration=1.366189707 podStartE2EDuration="5.012990322s" podCreationTimestamp="2025-09-12 17:30:56 +0000 UTC" firstStartedPulling="2025-09-12 17:30:56.725116125 +0000 UTC m=+33.998371954" lastFinishedPulling="2025-09-12 17:31:00.37191674 +0000 UTC m=+37.645172569" observedRunningTime="2025-09-12 17:31:01.010582613 +0000 UTC m=+38.283838442" watchObservedRunningTime="2025-09-12 17:31:01.012990322 +0000 UTC m=+38.286246111"
Sep 12 17:31:01.812418 kubelet[2670]: E0912 17:31:01.812295 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:31:01.812418 kubelet[2670]: E0912 17:31:01.812326 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:31:01.814150 containerd[1514]: time="2025-09-12T17:31:01.812942138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6clxf,Uid:2a5e96d8-a735-4e2c-b56d-4367bb835a56,Namespace:kube-system,Attempt:0,}"
Sep 12 17:31:01.815105 containerd[1514]: time="2025-09-12T17:31:01.814258408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tx66k,Uid:ffa23418-ac3c-4687-bab6-a901c353ead3,Namespace:kube-system,Attempt:0,}"
Sep 12 17:31:01.977821 systemd-networkd[1461]: cali27639905aef: Link UP
Sep 12 17:31:01.978153 systemd-networkd[1461]: cali27639905aef: Gained carrier
Sep 12 17:31:01.996177 containerd[1514]: 2025-09-12 17:31:01.883 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tx66k-eth0 coredns-668d6bf9bc- kube-system ffa23418-ac3c-4687-bab6-a901c353ead3 809 0 2025-09-12 17:30:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-tx66k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27639905aef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-"
Sep 12 17:31:01.996177 containerd[1514]: 2025-09-12 17:31:01.884 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:01.996177 containerd[1514]: 2025-09-12 17:31:01.916 [INFO][4300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" HandleID="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Workload="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.916 [INFO][4300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" HandleID="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Workload="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c38b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tx66k", "timestamp":"2025-09-12 17:31:01.916464512 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.917 [INFO][4300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.917 [INFO][4300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.917 [INFO][4300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.929 [INFO][4300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" host="localhost"
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.940 [INFO][4300] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.948 [INFO][4300] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.952 [INFO][4300] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.954 [INFO][4300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 17:31:01.996419 containerd[1514]: 2025-09-12 17:31:01.955 [INFO][4300] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" host="localhost"
Sep 12 17:31:01.996663 containerd[1514]: 2025-09-12 17:31:01.956 [INFO][4300] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706
Sep 12 17:31:01.996663 containerd[1514]: 2025-09-12 17:31:01.962 [INFO][4300] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" host="localhost"
Sep 12 17:31:01.996663 containerd[1514]: 2025-09-12 17:31:01.971 [INFO][4300] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" host="localhost"
Sep 12 17:31:01.996663 containerd[1514]: 2025-09-12 17:31:01.971 [INFO][4300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" host="localhost"
Sep 12 17:31:01.996663 containerd[1514]: 2025-09-12 17:31:01.971 [INFO][4300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:31:01.996663 containerd[1514]: 2025-09-12 17:31:01.971 [INFO][4300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" HandleID="k8s-pod-network.c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Workload="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:01.996782 containerd[1514]: 2025-09-12 17:31:01.973 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tx66k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ffa23418-ac3c-4687-bab6-a901c353ead3", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tx66k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27639905aef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:31:01.996897 containerd[1514]: 2025-09-12 17:31:01.973 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:01.996897 containerd[1514]: 2025-09-12 17:31:01.973 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27639905aef ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:01.996897 containerd[1514]: 2025-09-12 17:31:01.978 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:01.996972 containerd[1514]: 2025-09-12 17:31:01.978 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tx66k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ffa23418-ac3c-4687-bab6-a901c353ead3", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706", Pod:"coredns-668d6bf9bc-tx66k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27639905aef", MAC:"c6:49:41:85:a4:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:31:01.996972 containerd[1514]: 2025-09-12 17:31:01.992 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" Namespace="kube-system" Pod="coredns-668d6bf9bc-tx66k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tx66k-eth0"
Sep 12 17:31:02.059719 containerd[1514]: time="2025-09-12T17:31:02.059672608Z" level=info msg="connecting to shim c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706" address="unix:///run/containerd/s/5d496b0336f90165e6ad114d159806df9684ec15ac6734d985050c719467d5de" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:31:02.081894 systemd[1]: Started cri-containerd-c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706.scope - libcontainer container c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706.
Sep 12 17:31:02.085691 systemd-networkd[1461]: cali78c5dbaa097: Link UP
Sep 12 17:31:02.089267 systemd-networkd[1461]: cali78c5dbaa097: Gained carrier
Sep 12 17:31:02.103590 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.884 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6clxf-eth0 coredns-668d6bf9bc- kube-system 2a5e96d8-a735-4e2c-b56d-4367bb835a56 819 0 2025-09-12 17:30:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6clxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali78c5dbaa097 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-"
Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.884
[INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.923 [INFO][4301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" HandleID="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Workload="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.924 [INFO][4301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" HandleID="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Workload="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051a5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6clxf", "timestamp":"2025-09-12 17:31:01.923831474 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.924 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.971 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:01.971 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.034 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.045 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.052 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.054 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.057 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.058 [INFO][4301] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.060 [INFO][4301] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171 Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.066 [INFO][4301] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.080 [INFO][4301] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.080 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" host="localhost" Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.080 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:02.111683 containerd[1514]: 2025-09-12 17:31:02.080 [INFO][4301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" HandleID="k8s-pod-network.bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Workload="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.112230 containerd[1514]: 2025-09-12 17:31:02.083 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6clxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2a5e96d8-a735-4e2c-b56d-4367bb835a56", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6clxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78c5dbaa097", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:02.112230 containerd[1514]: 2025-09-12 17:31:02.083 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.112230 containerd[1514]: 2025-09-12 17:31:02.083 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78c5dbaa097 ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.112230 containerd[1514]: 2025-09-12 17:31:02.091 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.112230 containerd[1514]: 2025-09-12 17:31:02.092 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6clxf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2a5e96d8-a735-4e2c-b56d-4367bb835a56", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171", Pod:"coredns-668d6bf9bc-6clxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78c5dbaa097", MAC:"8a:06:0a:65:e4:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:02.112230 containerd[1514]: 2025-09-12 17:31:02.106 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" Namespace="kube-system" Pod="coredns-668d6bf9bc-6clxf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6clxf-eth0" Sep 12 17:31:02.138183 containerd[1514]: time="2025-09-12T17:31:02.138056571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tx66k,Uid:ffa23418-ac3c-4687-bab6-a901c353ead3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706\"" Sep 12 17:31:02.138903 kubelet[2670]: E0912 17:31:02.138876 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:02.140916 containerd[1514]: time="2025-09-12T17:31:02.140849108Z" level=info msg="CreateContainer within sandbox \"c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:31:02.158782 containerd[1514]: time="2025-09-12T17:31:02.158733372Z" level=info msg="Container ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:31:02.161954 containerd[1514]: time="2025-09-12T17:31:02.161905256Z" level=info msg="connecting to shim bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171" address="unix:///run/containerd/s/98bee1c65603d9e285661a793f7f3121ba970fd66abe61377f80add0d82cb77c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:31:02.168030 
containerd[1514]: time="2025-09-12T17:31:02.167968473Z" level=info msg="CreateContainer within sandbox \"c08a52060dd13f4694a1ef07093163b7ae7a20bc40225358edd3b925471c9706\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88\"" Sep 12 17:31:02.170379 containerd[1514]: time="2025-09-12T17:31:02.170322987Z" level=info msg="StartContainer for \"ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88\"" Sep 12 17:31:02.172507 containerd[1514]: time="2025-09-12T17:31:02.171639058Z" level=info msg="connecting to shim ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88" address="unix:///run/containerd/s/5d496b0336f90165e6ad114d159806df9684ec15ac6734d985050c719467d5de" protocol=ttrpc version=3 Sep 12 17:31:02.185761 systemd[1]: Started cri-containerd-bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171.scope - libcontainer container bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171. Sep 12 17:31:02.189458 systemd[1]: Started cri-containerd-ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88.scope - libcontainer container ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88. 
Sep 12 17:31:02.203689 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:02.229260 containerd[1514]: time="2025-09-12T17:31:02.229127268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6clxf,Uid:2a5e96d8-a735-4e2c-b56d-4367bb835a56,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171\"" Sep 12 17:31:02.230236 kubelet[2670]: E0912 17:31:02.230026 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:02.233519 containerd[1514]: time="2025-09-12T17:31:02.233245557Z" level=info msg="CreateContainer within sandbox \"bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:31:02.244701 containerd[1514]: time="2025-09-12T17:31:02.244660258Z" level=info msg="StartContainer for \"ab2749c81f0ffcc002b9dadb38bf5e827917bcfb5149ae75a6caa94c3a252f88\" returns successfully" Sep 12 17:31:02.250233 containerd[1514]: time="2025-09-12T17:31:02.250175656Z" level=info msg="Container a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:31:02.260715 containerd[1514]: time="2025-09-12T17:31:02.260584234Z" level=info msg="CreateContainer within sandbox \"bd6dfff5c05fed00d7b90cfb6646acecd8dddc0096b59a1db5a3ad8fc218b171\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6\"" Sep 12 17:31:02.261426 containerd[1514]: time="2025-09-12T17:31:02.261306607Z" level=info msg="StartContainer for \"a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6\"" Sep 12 17:31:02.264109 containerd[1514]: time="2025-09-12T17:31:02.264075705Z" level=info 
msg="connecting to shim a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6" address="unix:///run/containerd/s/98bee1c65603d9e285661a793f7f3121ba970fd66abe61377f80add0d82cb77c" protocol=ttrpc version=3 Sep 12 17:31:02.290757 systemd[1]: Started cri-containerd-a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6.scope - libcontainer container a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6. Sep 12 17:31:02.332363 containerd[1514]: time="2025-09-12T17:31:02.332218964Z" level=info msg="StartContainer for \"a6c97b6038ca665876db881c827fc0c14a4b6f2f11898ef9d572b8d379572cb6\" returns successfully" Sep 12 17:31:02.986798 kubelet[2670]: E0912 17:31:02.986757 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:02.991908 kubelet[2670]: E0912 17:31:02.991877 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:03.007323 kubelet[2670]: I0912 17:31:03.007180 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6clxf" podStartSLOduration=34.007162198 podStartE2EDuration="34.007162198s" podCreationTimestamp="2025-09-12 17:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:02.999884178 +0000 UTC m=+40.273140007" watchObservedRunningTime="2025-09-12 17:31:03.007162198 +0000 UTC m=+40.280418027" Sep 12 17:31:03.033214 kubelet[2670]: I0912 17:31:03.033141 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tx66k" podStartSLOduration=34.033122231 podStartE2EDuration="34.033122231s" podCreationTimestamp="2025-09-12 17:30:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:03.018843061 +0000 UTC m=+40.292098890" watchObservedRunningTime="2025-09-12 17:31:03.033122231 +0000 UTC m=+40.306378060" Sep 12 17:31:03.422707 systemd-networkd[1461]: cali27639905aef: Gained IPv6LL Sep 12 17:31:03.742701 systemd-networkd[1461]: cali78c5dbaa097: Gained IPv6LL Sep 12 17:31:03.812075 containerd[1514]: time="2025-09-12T17:31:03.811970020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-bkq8q,Uid:08410efb-654b-423d-9c11-9c6f67a8af4a,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:31:03.986093 systemd-networkd[1461]: cali7161e72975d: Link UP Sep 12 17:31:03.986520 systemd-networkd[1461]: cali7161e72975d: Gained carrier Sep 12 17:31:04.001253 kubelet[2670]: E0912 17:31:04.001032 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:04.001253 kubelet[2670]: E0912 17:31:04.001046 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.859 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0 calico-apiserver-557c55577d- calico-apiserver 08410efb-654b-423d-9c11-9c6f67a8af4a 818 0 2025-09-12 17:30:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:557c55577d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-557c55577d-bkq8q eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7161e72975d [] [] }} ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.859 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.900 [INFO][4527] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" HandleID="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Workload="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.900 [INFO][4527] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" HandleID="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Workload="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137dc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-557c55577d-bkq8q", "timestamp":"2025-09-12 17:31:03.900436621 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.900 [INFO][4527] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.900 [INFO][4527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.900 [INFO][4527] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.910 [INFO][4527] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.920 [INFO][4527] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.927 [INFO][4527] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.929 [INFO][4527] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.934 [INFO][4527] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.934 [INFO][4527] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.937 [INFO][4527] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.955 [INFO][4527] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 
17:31:03.980 [INFO][4527] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.980 [INFO][4527] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" host="localhost" Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.980 [INFO][4527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:04.021895 containerd[1514]: 2025-09-12 17:31:03.980 [INFO][4527] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" HandleID="k8s-pod-network.16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Workload="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.022641 containerd[1514]: 2025-09-12 17:31:03.982 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0", GenerateName:"calico-apiserver-557c55577d-", Namespace:"calico-apiserver", SelfLink:"", UID:"08410efb-654b-423d-9c11-9c6f67a8af4a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"557c55577d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-557c55577d-bkq8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7161e72975d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:04.022641 containerd[1514]: 2025-09-12 17:31:03.982 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.022641 containerd[1514]: 2025-09-12 17:31:03.982 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7161e72975d ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.022641 containerd[1514]: 2025-09-12 17:31:03.987 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.022641 
containerd[1514]: 2025-09-12 17:31:03.989 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0", GenerateName:"calico-apiserver-557c55577d-", Namespace:"calico-apiserver", SelfLink:"", UID:"08410efb-654b-423d-9c11-9c6f67a8af4a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c55577d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac", Pod:"calico-apiserver-557c55577d-bkq8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7161e72975d", MAC:"72:5b:19:ba:e0:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:04.022641 containerd[1514]: 2025-09-12 
17:31:04.017 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-bkq8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--bkq8q-eth0" Sep 12 17:31:04.063246 containerd[1514]: time="2025-09-12T17:31:04.063171510Z" level=info msg="connecting to shim 16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac" address="unix:///run/containerd/s/1d44a9e3d999170e56862c8df3153901a43c394a8e5abfcdcf7e920625f48f5e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:31:04.094732 systemd[1]: Started cri-containerd-16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac.scope - libcontainer container 16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac. Sep 12 17:31:04.109525 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:04.130255 containerd[1514]: time="2025-09-12T17:31:04.130215581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-bkq8q,Uid:08410efb-654b-423d-9c11-9c6f67a8af4a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac\"" Sep 12 17:31:04.131794 containerd[1514]: time="2025-09-12T17:31:04.131761087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:31:04.812958 containerd[1514]: time="2025-09-12T17:31:04.812902623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gw4k,Uid:bc7fd84a-a166-4af2-8370-28006dcb2723,Namespace:calico-system,Attempt:0,}" Sep 12 17:31:04.971879 systemd-networkd[1461]: calia95627f54bc: Link UP Sep 12 17:31:04.972730 systemd-networkd[1461]: calia95627f54bc: Gained carrier Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.861 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9gw4k-eth0 csi-node-driver- calico-system bc7fd84a-a166-4af2-8370-28006dcb2723 715 0 2025-09-12 17:30:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9gw4k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia95627f54bc [] [] }} ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.862 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.905 [INFO][4608] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" HandleID="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Workload="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.905 [INFO][4608] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" HandleID="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Workload="localhost-k8s-csi--node--driver--9gw4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005160a0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9gw4k", "timestamp":"2025-09-12 17:31:04.905135098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.905 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.905 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.905 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.915 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.921 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.926 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.928 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.931 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.931 [INFO][4608] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" host="localhost" Sep 12 17:31:04.987699 
containerd[1514]: 2025-09-12 17:31:04.933 [INFO][4608] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908 Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.943 [INFO][4608] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.965 [INFO][4608] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.965 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" host="localhost" Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.965 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:31:04.987699 containerd[1514]: 2025-09-12 17:31:04.965 [INFO][4608] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" HandleID="k8s-pod-network.ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Workload="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:04.988268 containerd[1514]: 2025-09-12 17:31:04.969 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gw4k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc7fd84a-a166-4af2-8370-28006dcb2723", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9gw4k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia95627f54bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:04.988268 containerd[1514]: 2025-09-12 17:31:04.969 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:04.988268 containerd[1514]: 2025-09-12 17:31:04.969 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia95627f54bc ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:04.988268 containerd[1514]: 2025-09-12 17:31:04.973 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:04.988268 containerd[1514]: 2025-09-12 17:31:04.973 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gw4k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bc7fd84a-a166-4af2-8370-28006dcb2723", ResourceVersion:"715", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908", Pod:"csi-node-driver-9gw4k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia95627f54bc", MAC:"52:2b:2a:fe:ee:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:04.988268 containerd[1514]: 2025-09-12 17:31:04.984 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" Namespace="calico-system" Pod="csi-node-driver-9gw4k" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gw4k-eth0" Sep 12 17:31:05.009962 containerd[1514]: time="2025-09-12T17:31:05.009907267Z" level=info msg="connecting to shim ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908" address="unix:///run/containerd/s/33315dc3b572f632fea0aa172ac885867ba0b9a681df833acd5bc0eb294e3730" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:31:05.011307 kubelet[2670]: E0912 17:31:05.011237 2670 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:05.013032 kubelet[2670]: E0912 17:31:05.013000 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:05.022690 systemd-networkd[1461]: cali7161e72975d: Gained IPv6LL Sep 12 17:31:05.038741 systemd[1]: Started cri-containerd-ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908.scope - libcontainer container ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908. Sep 12 17:31:05.064728 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:05.087565 containerd[1514]: time="2025-09-12T17:31:05.087408167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gw4k,Uid:bc7fd84a-a166-4af2-8370-28006dcb2723,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908\"" Sep 12 17:31:05.317803 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:56104.service - OpenSSH per-connection server daemon (10.0.0.1:56104). Sep 12 17:31:05.389196 sshd[4675]: Accepted publickey for core from 10.0.0.1 port 56104 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:05.391456 sshd-session[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:05.397213 systemd-logind[1496]: New session 8 of user core. Sep 12 17:31:05.406745 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:31:05.666056 sshd[4678]: Connection closed by 10.0.0.1 port 56104 Sep 12 17:31:05.666406 sshd-session[4675]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:05.670480 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:56104.service: Deactivated successfully. 
Sep 12 17:31:05.675357 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:31:05.676899 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:31:05.678511 systemd-logind[1496]: Removed session 8. Sep 12 17:31:05.812100 containerd[1514]: time="2025-09-12T17:31:05.812033869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75597c4f66-6cgkl,Uid:45118968-6c3d-4888-bb4b-65a6e25541e3,Namespace:calico-system,Attempt:0,}" Sep 12 17:31:05.812947 containerd[1514]: time="2025-09-12T17:31:05.812919359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d9d6j,Uid:46a5a404-015e-432b-aefc-2a536cc9a9bb,Namespace:calico-system,Attempt:0,}" Sep 12 17:31:05.995515 systemd-networkd[1461]: calieaf1062e31f: Link UP Sep 12 17:31:05.996418 systemd-networkd[1461]: calieaf1062e31f: Gained carrier Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.900 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--d9d6j-eth0 goldmane-54d579b49d- calico-system 46a5a404-015e-432b-aefc-2a536cc9a9bb 816 0 2025-09-12 17:30:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-d9d6j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calieaf1062e31f [] [] }} ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.900 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" 
Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.937 [INFO][4726] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" HandleID="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Workload="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.937 [INFO][4726] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" HandleID="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Workload="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-d9d6j", "timestamp":"2025-09-12 17:31:05.937263875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.937 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.937 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.937 [INFO][4726] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.950 [INFO][4726] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.957 [INFO][4726] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.962 [INFO][4726] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.966 [INFO][4726] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.969 [INFO][4726] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.969 [INFO][4726] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.971 [INFO][4726] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.976 [INFO][4726] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.984 [INFO][4726] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.985 [INFO][4726] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" host="localhost" Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.985 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:06.012913 containerd[1514]: 2025-09-12 17:31:05.985 [INFO][4726] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" HandleID="k8s-pod-network.3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Workload="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.014174 containerd[1514]: 2025-09-12 17:31:05.989 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--d9d6j-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"46a5a404-015e-432b-aefc-2a536cc9a9bb", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-d9d6j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieaf1062e31f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:06.014174 containerd[1514]: 2025-09-12 17:31:05.989 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.014174 containerd[1514]: 2025-09-12 17:31:05.990 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieaf1062e31f ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.014174 containerd[1514]: 2025-09-12 17:31:05.997 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.014174 containerd[1514]: 2025-09-12 17:31:05.998 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" 
Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--d9d6j-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"46a5a404-015e-432b-aefc-2a536cc9a9bb", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c", Pod:"goldmane-54d579b49d-d9d6j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieaf1062e31f", MAC:"1e:7f:59:0e:5f:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:06.014174 containerd[1514]: 2025-09-12 17:31:06.008 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" Namespace="calico-system" Pod="goldmane-54d579b49d-d9d6j" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--d9d6j-eth0" Sep 12 17:31:06.045569 containerd[1514]: time="2025-09-12T17:31:06.045192987Z" 
level=info msg="connecting to shim 3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c" address="unix:///run/containerd/s/6d6bc322fc7e8d73480990aa83f292719944f27dda58b3fe0d8f12605f07e6ca" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:31:06.081735 systemd[1]: Started cri-containerd-3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c.scope - libcontainer container 3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c. Sep 12 17:31:06.100936 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:06.108307 systemd-networkd[1461]: cali2f0fa40a640: Link UP Sep 12 17:31:06.109322 systemd-networkd[1461]: cali2f0fa40a640: Gained carrier Sep 12 17:31:06.111784 systemd-networkd[1461]: calia95627f54bc: Gained IPv6LL Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.902 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0 calico-kube-controllers-75597c4f66- calico-system 45118968-6c3d-4888-bb4b-65a6e25541e3 815 0 2025-09-12 17:30:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75597c4f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75597c4f66-6cgkl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f0fa40a640 [] [] }} ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.902 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.951 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" HandleID="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Workload="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.951 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" HandleID="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Workload="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034a150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75597c4f66-6cgkl", "timestamp":"2025-09-12 17:31:05.951107887 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.951 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.985 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:05.985 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.052 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.060 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.066 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.069 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.073 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.073 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.075 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.085 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.095 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.096 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" host="localhost" Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.096 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:31:06.124657 containerd[1514]: 2025-09-12 17:31:06.096 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" HandleID="k8s-pod-network.4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Workload="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.125458 containerd[1514]: 2025-09-12 17:31:06.103 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0", GenerateName:"calico-kube-controllers-75597c4f66-", Namespace:"calico-system", SelfLink:"", UID:"45118968-6c3d-4888-bb4b-65a6e25541e3", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75597c4f66", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75597c4f66-6cgkl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f0fa40a640", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:06.125458 containerd[1514]: 2025-09-12 17:31:06.103 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.125458 containerd[1514]: 2025-09-12 17:31:06.103 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f0fa40a640 ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.125458 containerd[1514]: 2025-09-12 17:31:06.109 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.125458 containerd[1514]: 
2025-09-12 17:31:06.109 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0", GenerateName:"calico-kube-controllers-75597c4f66-", Namespace:"calico-system", SelfLink:"", UID:"45118968-6c3d-4888-bb4b-65a6e25541e3", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75597c4f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec", Pod:"calico-kube-controllers-75597c4f66-6cgkl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f0fa40a640", MAC:"1a:c4:b6:6e:72:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:06.125458 
containerd[1514]: 2025-09-12 17:31:06.121 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" Namespace="calico-system" Pod="calico-kube-controllers-75597c4f66-6cgkl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75597c4f66--6cgkl-eth0" Sep 12 17:31:06.142742 containerd[1514]: time="2025-09-12T17:31:06.142700339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-d9d6j,Uid:46a5a404-015e-432b-aefc-2a536cc9a9bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c\"" Sep 12 17:31:06.161539 containerd[1514]: time="2025-09-12T17:31:06.161480801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:06.165953 containerd[1514]: time="2025-09-12T17:31:06.165590945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:31:06.167253 containerd[1514]: time="2025-09-12T17:31:06.167096136Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:06.168989 containerd[1514]: time="2025-09-12T17:31:06.168953715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:06.170578 containerd[1514]: time="2025-09-12T17:31:06.169753329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.037952043s" Sep 12 17:31:06.170578 containerd[1514]: time="2025-09-12T17:31:06.169794367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:31:06.171966 containerd[1514]: time="2025-09-12T17:31:06.171856499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:31:06.173595 containerd[1514]: time="2025-09-12T17:31:06.173552004Z" level=info msg="connecting to shim 4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec" address="unix:///run/containerd/s/97656505b6b52fa524a9f6559abd44bace7997b4688d0c7cd02bf0a7063d59b5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:31:06.173779 containerd[1514]: time="2025-09-12T17:31:06.173680479Z" level=info msg="CreateContainer within sandbox \"16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:31:06.180998 containerd[1514]: time="2025-09-12T17:31:06.180956640Z" level=info msg="Container 4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:31:06.188043 containerd[1514]: time="2025-09-12T17:31:06.187999688Z" level=info msg="CreateContainer within sandbox \"16ad6ed9832e4f708bb3483aa8b669977f4059122ea94f679eed4a510c54cfac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a\"" Sep 12 17:31:06.188947 containerd[1514]: time="2025-09-12T17:31:06.188924538Z" level=info msg="StartContainer for \"4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a\"" Sep 12 17:31:06.189956 containerd[1514]: time="2025-09-12T17:31:06.189924545Z" level=info msg="connecting to shim 
4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a" address="unix:///run/containerd/s/1d44a9e3d999170e56862c8df3153901a43c394a8e5abfcdcf7e920625f48f5e" protocol=ttrpc version=3 Sep 12 17:31:06.202738 systemd[1]: Started cri-containerd-4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec.scope - libcontainer container 4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec. Sep 12 17:31:06.208717 systemd[1]: Started cri-containerd-4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a.scope - libcontainer container 4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a. Sep 12 17:31:06.219738 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:06.245474 containerd[1514]: time="2025-09-12T17:31:06.245412639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75597c4f66-6cgkl,Uid:45118968-6c3d-4888-bb4b-65a6e25541e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec\"" Sep 12 17:31:06.257835 containerd[1514]: time="2025-09-12T17:31:06.257730234Z" level=info msg="StartContainer for \"4017ebb14322fd95e70142d43c5b749d7a6ba0ece470e0c56741ba2ab8b4196a\" returns successfully" Sep 12 17:31:06.814162 containerd[1514]: time="2025-09-12T17:31:06.814122088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-vhmbk,Uid:4abd5bc0-e420-4f4a-8dbb-8520b88f50a2,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:31:06.988005 systemd-networkd[1461]: calib8e225f2967: Link UP Sep 12 17:31:06.988526 systemd-networkd[1461]: calib8e225f2967: Gained carrier Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.849 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0 calico-apiserver-557c55577d- 
calico-apiserver 4abd5bc0-e420-4f4a-8dbb-8520b88f50a2 820 0 2025-09-12 17:30:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:557c55577d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-557c55577d-vhmbk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib8e225f2967 [] [] }} ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.849 [INFO][4897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.926 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" HandleID="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Workload="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.926 [INFO][4911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" HandleID="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Workload="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-557c55577d-vhmbk", "timestamp":"2025-09-12 17:31:06.926648385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.926 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.926 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.926 [INFO][4911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.946 [INFO][4911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.952 [INFO][4911] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.958 [INFO][4911] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.960 [INFO][4911] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.963 [INFO][4911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.963 [INFO][4911] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.964 [INFO][4911] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46 Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.969 [INFO][4911] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.978 [INFO][4911] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.980 [INFO][4911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" host="localhost" Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.980 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:31:07.007194 containerd[1514]: 2025-09-12 17:31:06.980 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" HandleID="k8s-pod-network.139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Workload="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.007722 containerd[1514]: 2025-09-12 17:31:06.985 [INFO][4897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0", GenerateName:"calico-apiserver-557c55577d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4abd5bc0-e420-4f4a-8dbb-8520b88f50a2", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c55577d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-557c55577d-vhmbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e225f2967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:07.007722 containerd[1514]: 2025-09-12 17:31:06.985 [INFO][4897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.007722 containerd[1514]: 2025-09-12 17:31:06.985 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8e225f2967 ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.007722 containerd[1514]: 2025-09-12 17:31:06.989 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.007722 containerd[1514]: 2025-09-12 17:31:06.989 [INFO][4897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0", 
GenerateName:"calico-apiserver-557c55577d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4abd5bc0-e420-4f4a-8dbb-8520b88f50a2", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557c55577d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46", Pod:"calico-apiserver-557c55577d-vhmbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib8e225f2967", MAC:"52:d0:6b:33:71:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:31:07.007722 containerd[1514]: 2025-09-12 17:31:07.004 [INFO][4897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" Namespace="calico-apiserver" Pod="calico-apiserver-557c55577d-vhmbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--557c55577d--vhmbk-eth0" Sep 12 17:31:07.031724 containerd[1514]: time="2025-09-12T17:31:07.031678637Z" level=info msg="connecting to shim 139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46" 
address="unix:///run/containerd/s/e979106acce263d475df4324485957b330c3ed60825f4c45f21394f64d76db3b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:31:07.042789 kubelet[2670]: I0912 17:31:07.042728 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-557c55577d-bkq8q" podStartSLOduration=27.001298315 podStartE2EDuration="29.041467324s" podCreationTimestamp="2025-09-12 17:30:38 +0000 UTC" firstStartedPulling="2025-09-12 17:31:04.131446898 +0000 UTC m=+41.404702727" lastFinishedPulling="2025-09-12 17:31:06.171615907 +0000 UTC m=+43.444871736" observedRunningTime="2025-09-12 17:31:07.040261122 +0000 UTC m=+44.313516951" watchObservedRunningTime="2025-09-12 17:31:07.041467324 +0000 UTC m=+44.314723153" Sep 12 17:31:07.069762 systemd[1]: Started cri-containerd-139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46.scope - libcontainer container 139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46. Sep 12 17:31:07.082477 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:31:07.114004 containerd[1514]: time="2025-09-12T17:31:07.113963962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557c55577d-vhmbk,Uid:4abd5bc0-e420-4f4a-8dbb-8520b88f50a2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46\"" Sep 12 17:31:07.118004 containerd[1514]: time="2025-09-12T17:31:07.117966114Z" level=info msg="CreateContainer within sandbox \"139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:31:07.128551 containerd[1514]: time="2025-09-12T17:31:07.128386620Z" level=info msg="Container 41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:31:07.138480 containerd[1514]: 
time="2025-09-12T17:31:07.138432378Z" level=info msg="CreateContainer within sandbox \"139eccb28e531b62c941e3863aa4c5dcccb407efddb6a6e65d83b052f5f4bb46\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860\"" Sep 12 17:31:07.139437 containerd[1514]: time="2025-09-12T17:31:07.139407267Z" level=info msg="StartContainer for \"41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860\"" Sep 12 17:31:07.140625 containerd[1514]: time="2025-09-12T17:31:07.140598309Z" level=info msg="connecting to shim 41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860" address="unix:///run/containerd/s/e979106acce263d475df4324485957b330c3ed60825f4c45f21394f64d76db3b" protocol=ttrpc version=3 Sep 12 17:31:07.167734 systemd[1]: Started cri-containerd-41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860.scope - libcontainer container 41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860. 
Sep 12 17:31:07.214681 containerd[1514]: time="2025-09-12T17:31:07.214597019Z" level=info msg="StartContainer for \"41fd2094ddab0c1417a61092c7d91a6fcce69eb11603e4d3aa559eb2fbb4f860\" returns successfully" Sep 12 17:31:07.376996 containerd[1514]: time="2025-09-12T17:31:07.376872702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:07.378440 containerd[1514]: time="2025-09-12T17:31:07.378355495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:31:07.379159 containerd[1514]: time="2025-09-12T17:31:07.379129110Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:07.382059 containerd[1514]: time="2025-09-12T17:31:07.381892942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:07.382937 containerd[1514]: time="2025-09-12T17:31:07.382787113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.210882175s" Sep 12 17:31:07.382937 containerd[1514]: time="2025-09-12T17:31:07.382821912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:31:07.384580 containerd[1514]: time="2025-09-12T17:31:07.383824560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 
17:31:07.385391 containerd[1514]: time="2025-09-12T17:31:07.385327512Z" level=info msg="CreateContainer within sandbox \"ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:31:07.394670 containerd[1514]: time="2025-09-12T17:31:07.394626214Z" level=info msg="Container 865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:31:07.401200 containerd[1514]: time="2025-09-12T17:31:07.401149205Z" level=info msg="CreateContainer within sandbox \"ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e\"" Sep 12 17:31:07.401826 containerd[1514]: time="2025-09-12T17:31:07.401785665Z" level=info msg="StartContainer for \"865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e\"" Sep 12 17:31:07.403379 containerd[1514]: time="2025-09-12T17:31:07.403338335Z" level=info msg="connecting to shim 865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e" address="unix:///run/containerd/s/33315dc3b572f632fea0aa172ac885867ba0b9a681df833acd5bc0eb294e3730" protocol=ttrpc version=3 Sep 12 17:31:07.427755 systemd[1]: Started cri-containerd-865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e.scope - libcontainer container 865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e. 
Sep 12 17:31:07.471833 containerd[1514]: time="2025-09-12T17:31:07.471771423Z" level=info msg="StartContainer for \"865e87184707dd38dfa4b66e4d73d492119422f8619ebfdfd152f9e9b7d7650e\" returns successfully" Sep 12 17:31:07.838718 systemd-networkd[1461]: cali2f0fa40a640: Gained IPv6LL Sep 12 17:31:07.902780 systemd-networkd[1461]: calieaf1062e31f: Gained IPv6LL Sep 12 17:31:08.037042 kubelet[2670]: I0912 17:31:08.036832 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:31:08.050294 kubelet[2670]: I0912 17:31:08.050206 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-557c55577d-vhmbk" podStartSLOduration=30.050186422 podStartE2EDuration="30.050186422s" podCreationTimestamp="2025-09-12 17:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:08.048552753 +0000 UTC m=+45.321808582" watchObservedRunningTime="2025-09-12 17:31:08.050186422 +0000 UTC m=+45.323442251" Sep 12 17:31:08.606834 systemd-networkd[1461]: calib8e225f2967: Gained IPv6LL Sep 12 17:31:09.200028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539296312.mount: Deactivated successfully. 
Sep 12 17:31:09.633304 containerd[1514]: time="2025-09-12T17:31:09.633242873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:09.634460 containerd[1514]: time="2025-09-12T17:31:09.634242163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 12 17:31:09.635446 containerd[1514]: time="2025-09-12T17:31:09.635411167Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:09.637760 containerd[1514]: time="2025-09-12T17:31:09.637726297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:09.638508 containerd[1514]: time="2025-09-12T17:31:09.638473914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.253692705s"
Sep 12 17:31:09.638508 containerd[1514]: time="2025-09-12T17:31:09.638508673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 12 17:31:09.639414 containerd[1514]: time="2025-09-12T17:31:09.639388166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 17:31:09.640857 containerd[1514]: time="2025-09-12T17:31:09.640828523Z" level=info msg="CreateContainer within sandbox \"3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:31:09.650737 containerd[1514]: time="2025-09-12T17:31:09.649634375Z" level=info msg="Container 3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:31:09.658989 containerd[1514]: time="2025-09-12T17:31:09.658925173Z" level=info msg="CreateContainer within sandbox \"3dc6351d96baee2bc9e3571ae236ea5a5a75586cfd3602d772dc66dcee66f57c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\""
Sep 12 17:31:09.659508 containerd[1514]: time="2025-09-12T17:31:09.659479237Z" level=info msg="StartContainer for \"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\""
Sep 12 17:31:09.660795 containerd[1514]: time="2025-09-12T17:31:09.660766478Z" level=info msg="connecting to shim 3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0" address="unix:///run/containerd/s/6d6bc322fc7e8d73480990aa83f292719944f27dda58b3fe0d8f12605f07e6ca" protocol=ttrpc version=3
Sep 12 17:31:09.686787 systemd[1]: Started cri-containerd-3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0.scope - libcontainer container 3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0.
Sep 12 17:31:09.730719 containerd[1514]: time="2025-09-12T17:31:09.730616438Z" level=info msg="StartContainer for \"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\" returns successfully"
Sep 12 17:31:10.058406 kubelet[2670]: I0912 17:31:10.058337 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-d9d6j" podStartSLOduration=25.563383372 podStartE2EDuration="29.058316297s" podCreationTimestamp="2025-09-12 17:30:41 +0000 UTC" firstStartedPulling="2025-09-12 17:31:06.144304886 +0000 UTC m=+43.417560715" lastFinishedPulling="2025-09-12 17:31:09.639237851 +0000 UTC m=+46.912493640" observedRunningTime="2025-09-12 17:31:10.05719845 +0000 UTC m=+47.330454279" watchObservedRunningTime="2025-09-12 17:31:10.058316297 +0000 UTC m=+47.331572126"
Sep 12 17:31:10.685858 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:34590.service - OpenSSH per-connection server daemon (10.0.0.1:34590).
Sep 12 17:31:10.786886 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 34590 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:10.791861 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:10.801367 systemd-logind[1496]: New session 9 of user core.
Sep 12 17:31:10.807787 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:31:10.821622 containerd[1514]: time="2025-09-12T17:31:10.818259399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\" id:\"e2ec67f81229d12ab498dbfda16068ce5ff77956fda212d1dc1646b913824769\" pid:5122 exit_status:1 exited_at:{seconds:1757698270 nanos:817105473}"
Sep 12 17:31:10.941038 containerd[1514]: time="2025-09-12T17:31:10.940902814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\" id:\"7e1e776169f7496f3d7da2f0f0526236220044b2bcdf036fb60aca9f76d39be3\" pid:5150 exit_status:1 exited_at:{seconds:1757698270 nanos:940191555}"
Sep 12 17:31:11.114472 sshd[5137]: Connection closed by 10.0.0.1 port 34590
Sep 12 17:31:11.115088 sshd-session[5109]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:11.120279 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:34590.service: Deactivated successfully.
Sep 12 17:31:11.120330 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:31:11.122889 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:31:11.126501 systemd-logind[1496]: Removed session 9.
Sep 12 17:31:11.163785 containerd[1514]: time="2025-09-12T17:31:11.163724475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\" id:\"47f461807f20106d54eb8843c48d19d424c0f7ccd8d4e7b377ac104338cc6e82\" pid:5188 exit_status:1 exited_at:{seconds:1757698271 nanos:163339286}"
Sep 12 17:31:11.623631 containerd[1514]: time="2025-09-12T17:31:11.623502763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:11.624402 containerd[1514]: time="2025-09-12T17:31:11.624365258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 12 17:31:11.625828 containerd[1514]: time="2025-09-12T17:31:11.625767538Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:11.629510 containerd[1514]: time="2025-09-12T17:31:11.629411833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:11.630543 containerd[1514]: time="2025-09-12T17:31:11.630502721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.991078596s"
Sep 12 17:31:11.630640 containerd[1514]: time="2025-09-12T17:31:11.630553440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 12 17:31:11.631461 containerd[1514]: time="2025-09-12T17:31:11.631433695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:31:11.642216 containerd[1514]: time="2025-09-12T17:31:11.642175226Z" level=info msg="CreateContainer within sandbox \"4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 17:31:11.650996 containerd[1514]: time="2025-09-12T17:31:11.650851416Z" level=info msg="Container e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:31:11.660412 containerd[1514]: time="2025-09-12T17:31:11.660343543Z" level=info msg="CreateContainer within sandbox \"4317d7a6097d8086c54f34841aa00946953a1cf3da3c654d0e218ca2781d47ec\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c\""
Sep 12 17:31:11.661129 containerd[1514]: time="2025-09-12T17:31:11.661099361Z" level=info msg="StartContainer for \"e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c\""
Sep 12 17:31:11.662805 containerd[1514]: time="2025-09-12T17:31:11.662752513Z" level=info msg="connecting to shim e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c" address="unix:///run/containerd/s/97656505b6b52fa524a9f6559abd44bace7997b4688d0c7cd02bf0a7063d59b5" protocol=ttrpc version=3
Sep 12 17:31:11.683751 systemd[1]: Started cri-containerd-e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c.scope - libcontainer container e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c.
Sep 12 17:31:11.732646 containerd[1514]: time="2025-09-12T17:31:11.732602023Z" level=info msg="StartContainer for \"e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c\" returns successfully"
Sep 12 17:31:12.138907 containerd[1514]: time="2025-09-12T17:31:12.138775397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f74f8d35db5471abd735744792496924a36e79287fa7619b98fd44b8b9928d0\" id:\"577234165ffaa747de5df66d604902b144268b40c0dc6ca5c234557c390b9692\" pid:5258 exit_status:1 exited_at:{seconds:1757698272 nanos:138352609}"
Sep 12 17:31:12.837343 containerd[1514]: time="2025-09-12T17:31:12.837279377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:12.838852 containerd[1514]: time="2025-09-12T17:31:12.838799094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 17:31:12.839968 containerd[1514]: time="2025-09-12T17:31:12.839941902Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:12.842197 containerd[1514]: time="2025-09-12T17:31:12.841902167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:31:12.843778 containerd[1514]: time="2025-09-12T17:31:12.843738516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.212250943s"
Sep 12 17:31:12.843936 containerd[1514]: time="2025-09-12T17:31:12.843918351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 17:31:12.847406 containerd[1514]: time="2025-09-12T17:31:12.847363214Z" level=info msg="CreateContainer within sandbox \"ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:31:12.870989 containerd[1514]: time="2025-09-12T17:31:12.869863824Z" level=info msg="Container b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:31:12.888765 containerd[1514]: time="2025-09-12T17:31:12.888659657Z" level=info msg="CreateContainer within sandbox \"ecffdfb375bf146da117a1105b7f777bf928e3d6ff98ab61ed161d9257b70908\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a\""
Sep 12 17:31:12.891749 containerd[1514]: time="2025-09-12T17:31:12.891635053Z" level=info msg="StartContainer for \"b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a\""
Sep 12 17:31:12.893578 containerd[1514]: time="2025-09-12T17:31:12.893493641Z" level=info msg="connecting to shim b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a" address="unix:///run/containerd/s/33315dc3b572f632fea0aa172ac885867ba0b9a681df833acd5bc0eb294e3730" protocol=ttrpc version=3
Sep 12 17:31:12.929761 systemd[1]: Started cri-containerd-b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a.scope - libcontainer container b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a.
Sep 12 17:31:13.013422 containerd[1514]: time="2025-09-12T17:31:13.013365610Z" level=info msg="StartContainer for \"b62db8337b3bc14538159eed78c4bab4da5c029b85e816f8156ce54911ad8a7a\" returns successfully"
Sep 12 17:31:13.100537 kubelet[2670]: I0912 17:31:13.100349 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9gw4k" podStartSLOduration=23.344358827 podStartE2EDuration="31.100329756s" podCreationTimestamp="2025-09-12 17:30:42 +0000 UTC" firstStartedPulling="2025-09-12 17:31:05.088923755 +0000 UTC m=+42.362179584" lastFinishedPulling="2025-09-12 17:31:12.844894684 +0000 UTC m=+50.118150513" observedRunningTime="2025-09-12 17:31:13.099505298 +0000 UTC m=+50.372761127" watchObservedRunningTime="2025-09-12 17:31:13.100329756 +0000 UTC m=+50.373585665"
Sep 12 17:31:13.101117 kubelet[2670]: I0912 17:31:13.100855 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75597c4f66-6cgkl" podStartSLOduration=25.716376596 podStartE2EDuration="31.100834022s" podCreationTimestamp="2025-09-12 17:30:42 +0000 UTC" firstStartedPulling="2025-09-12 17:31:06.246822273 +0000 UTC m=+43.520078102" lastFinishedPulling="2025-09-12 17:31:11.631279699 +0000 UTC m=+48.904535528" observedRunningTime="2025-09-12 17:31:12.093512586 +0000 UTC m=+49.366768455" watchObservedRunningTime="2025-09-12 17:31:13.100834022 +0000 UTC m=+50.374089851"
Sep 12 17:31:13.116877 containerd[1514]: time="2025-09-12T17:31:13.116821505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e02cee21971fda7b219d6f45a158a3992b6520494a95925890fc8a90e0669f9c\" id:\"fa5f8557b74d0a6e879ef195d1731f92102e1f30867a251e1b3987ccf1e2e279\" pid:5321 exited_at:{seconds:1757698273 nanos:116467155}"
Sep 12 17:31:13.911198 kubelet[2670]: I0912 17:31:13.911068 2670 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:31:13.911198 kubelet[2670]: I0912 17:31:13.911132 2670 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:31:16.131349 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:34604.service - OpenSSH per-connection server daemon (10.0.0.1:34604).
Sep 12 17:31:16.217769 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 34604 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:16.221826 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:16.235040 systemd-logind[1496]: New session 10 of user core.
Sep 12 17:31:16.247705 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:31:16.530980 sshd[5335]: Connection closed by 10.0.0.1 port 34604
Sep 12 17:31:16.531516 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:16.546150 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:34604.service: Deactivated successfully.
Sep 12 17:31:16.548656 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:31:16.551167 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:31:16.556136 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:34618.service - OpenSSH per-connection server daemon (10.0.0.1:34618).
Sep 12 17:31:16.558784 systemd-logind[1496]: Removed session 10.
Sep 12 17:31:16.620175 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 34618 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:16.621651 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:16.626018 systemd-logind[1496]: New session 11 of user core.
Sep 12 17:31:16.636779 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:31:16.885507 sshd[5352]: Connection closed by 10.0.0.1 port 34618
Sep 12 17:31:16.886501 sshd-session[5349]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:16.898127 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:34618.service: Deactivated successfully.
Sep 12 17:31:16.901305 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:31:16.902685 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:31:16.907614 systemd-logind[1496]: Removed session 11.
Sep 12 17:31:16.910620 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:34634.service - OpenSSH per-connection server daemon (10.0.0.1:34634).
Sep 12 17:31:16.972287 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 34634 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:16.973804 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:16.977779 systemd-logind[1496]: New session 12 of user core.
Sep 12 17:31:16.987763 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:31:17.139378 sshd[5371]: Connection closed by 10.0.0.1 port 34634
Sep 12 17:31:17.139767 sshd-session[5367]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:17.143554 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:34634.service: Deactivated successfully.
Sep 12 17:31:17.145761 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:31:17.148471 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:31:17.151639 systemd-logind[1496]: Removed session 12.
Sep 12 17:31:22.154856 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:45060.service - OpenSSH per-connection server daemon (10.0.0.1:45060).
Sep 12 17:31:22.227235 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 45060 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:22.229443 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:22.237716 systemd-logind[1496]: New session 13 of user core.
Sep 12 17:31:22.248820 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:31:22.421025 sshd[5397]: Connection closed by 10.0.0.1 port 45060
Sep 12 17:31:22.423001 sshd-session[5394]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:22.431865 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:45060.service: Deactivated successfully.
Sep 12 17:31:22.433799 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:31:22.434483 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:31:22.436717 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:45066.service - OpenSSH per-connection server daemon (10.0.0.1:45066).
Sep 12 17:31:22.438239 systemd-logind[1496]: Removed session 13.
Sep 12 17:31:22.507737 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 45066 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:22.509061 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:22.516020 systemd-logind[1496]: New session 14 of user core.
Sep 12 17:31:22.525760 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:31:22.736408 sshd[5413]: Connection closed by 10.0.0.1 port 45066
Sep 12 17:31:22.736911 sshd-session[5410]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:22.750777 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:45066.service: Deactivated successfully.
Sep 12 17:31:22.752652 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:31:22.753953 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:31:22.755952 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:45070.service - OpenSSH per-connection server daemon (10.0.0.1:45070).
Sep 12 17:31:22.757021 systemd-logind[1496]: Removed session 14.
Sep 12 17:31:22.832567 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 45070 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:22.833951 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:22.838139 systemd-logind[1496]: New session 15 of user core.
Sep 12 17:31:22.843698 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:31:23.555398 sshd[5429]: Connection closed by 10.0.0.1 port 45070
Sep 12 17:31:23.556893 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:23.567458 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:45070.service: Deactivated successfully.
Sep 12 17:31:23.570598 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:31:23.571653 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:31:23.579146 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:45078.service - OpenSSH per-connection server daemon (10.0.0.1:45078).
Sep 12 17:31:23.580659 systemd-logind[1496]: Removed session 15.
Sep 12 17:31:23.646279 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 45078 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:23.647867 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:23.652550 systemd-logind[1496]: New session 16 of user core.
Sep 12 17:31:23.658729 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:31:24.071283 sshd[5450]: Connection closed by 10.0.0.1 port 45078
Sep 12 17:31:24.072052 sshd-session[5447]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:24.091429 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:45078.service: Deactivated successfully.
Sep 12 17:31:24.095339 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:31:24.097797 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:31:24.099971 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080).
Sep 12 17:31:24.101792 systemd-logind[1496]: Removed session 16.
Sep 12 17:31:24.173387 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:24.174861 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:24.180042 systemd-logind[1496]: New session 17 of user core.
Sep 12 17:31:24.186719 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:31:24.327918 sshd[5465]: Connection closed by 10.0.0.1 port 45080
Sep 12 17:31:24.328365 sshd-session[5462]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:24.331880 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:31:24.332069 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:45080.service: Deactivated successfully.
Sep 12 17:31:24.333876 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:31:24.335045 systemd-logind[1496]: Removed session 17.
Sep 12 17:31:28.095057 containerd[1514]: time="2025-09-12T17:31:28.094947616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a50838bb513428a705c4824e9ffda6b002e58ae904f1375283e28a712937299\" id:\"f8c94110f6046415aa8337250df3c76a167e190d4d2fd643bbc74e9aedcd1e4f\" pid:5492 exited_at:{seconds:1757698288 nanos:94567784}"
Sep 12 17:31:29.345626 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:45082.service - OpenSSH per-connection server daemon (10.0.0.1:45082).
Sep 12 17:31:29.440050 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 45082 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:29.441716 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:29.447241 systemd-logind[1496]: New session 18 of user core.
Sep 12 17:31:29.463789 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:31:29.638681 sshd[5510]: Connection closed by 10.0.0.1 port 45082
Sep 12 17:31:29.638699 sshd-session[5507]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:29.642487 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:45082.service: Deactivated successfully.
Sep 12 17:31:29.644450 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:31:29.645340 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:31:29.647096 systemd-logind[1496]: Removed session 18.
Sep 12 17:31:33.848294 kubelet[2670]: I0912 17:31:33.848211 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:31:34.654353 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:51600.service - OpenSSH per-connection server daemon (10.0.0.1:51600).
Sep 12 17:31:34.721928 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 51600 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:34.723387 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:34.730913 systemd-logind[1496]: New session 19 of user core.
Sep 12 17:31:34.740759 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:31:34.892689 sshd[5536]: Connection closed by 10.0.0.1 port 51600
Sep 12 17:31:34.893040 sshd-session[5533]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:34.897346 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:51600.service: Deactivated successfully.
Sep 12 17:31:34.901161 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:31:34.902119 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:31:34.905110 systemd-logind[1496]: Removed session 19.
Sep 12 17:31:36.812303 kubelet[2670]: E0912 17:31:36.812203 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:31:39.912169 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:51616.service - OpenSSH per-connection server daemon (10.0.0.1:51616).
Sep 12 17:31:39.998647 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 51616 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:31:40.000500 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:31:40.005162 systemd-logind[1496]: New session 20 of user core.
Sep 12 17:31:40.012742 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:31:40.213355 sshd[5559]: Connection closed by 10.0.0.1 port 51616
Sep 12 17:31:40.213774 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Sep 12 17:31:40.217591 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:51616.service: Deactivated successfully.
Sep 12 17:31:40.219477 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:31:40.221858 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:31:40.223966 systemd-logind[1496]: Removed session 20.
Sep 12 17:31:40.813215 kubelet[2670]: E0912 17:31:40.813123 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"