Sep 9 21:29:47.739904 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 21:29:47.739925 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 19:54:20 -00 2025
Sep 9 21:29:47.739934 kernel: KASLR enabled
Sep 9 21:29:47.739939 kernel: efi: EFI v2.7 by EDK II
Sep 9 21:29:47.739945 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 21:29:47.739950 kernel: random: crng init done
Sep 9 21:29:47.739956 kernel: secureboot: Secure boot disabled
Sep 9 21:29:47.739962 kernel: ACPI: Early table checksum verification disabled
Sep 9 21:29:47.739968 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 21:29:47.739975 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 21:29:47.739980 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.739986 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.739992 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.739997 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.740004 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.740012 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.740018 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.740024 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.740030 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:29:47.740036 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 21:29:47.740042 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 21:29:47.740048 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 21:29:47.740054 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 21:29:47.740060 kernel: Zone ranges:
Sep 9 21:29:47.740066 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 21:29:47.740073 kernel: DMA32 empty
Sep 9 21:29:47.740079 kernel: Normal empty
Sep 9 21:29:47.740085 kernel: Device empty
Sep 9 21:29:47.740091 kernel: Movable zone start for each node
Sep 9 21:29:47.740096 kernel: Early memory node ranges
Sep 9 21:29:47.740102 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 21:29:47.740108 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 21:29:47.740114 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 21:29:47.740120 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 21:29:47.740126 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 21:29:47.740132 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 21:29:47.740138 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 21:29:47.740145 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 21:29:47.740151 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 21:29:47.740158 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 21:29:47.740166 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 21:29:47.740173 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 21:29:47.740179 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 21:29:47.740187 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 21:29:47.740193 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 21:29:47.740199 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 21:29:47.740206 kernel: psci: probing for conduit method from ACPI.
Sep 9 21:29:47.740212 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 21:29:47.740218 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 21:29:47.740225 kernel: psci: Trusted OS migration not required
Sep 9 21:29:47.740231 kernel: psci: SMC Calling Convention v1.1
Sep 9 21:29:47.740237 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 21:29:47.740244 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 21:29:47.740252 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 21:29:47.740258 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 21:29:47.740265 kernel: Detected PIPT I-cache on CPU0
Sep 9 21:29:47.740381 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 21:29:47.740389 kernel: CPU features: detected: Spectre-v4
Sep 9 21:29:47.740395 kernel: CPU features: detected: Spectre-BHB
Sep 9 21:29:47.740401 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 21:29:47.740408 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 21:29:47.740415 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 21:29:47.740421 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 21:29:47.740427 kernel: alternatives: applying boot alternatives
Sep 9 21:29:47.740435 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f5bd02e888bbcae51800cf37660dcdbf356eb05540a834019d706c2521a92d30
Sep 9 21:29:47.740445 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 21:29:47.740452 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 21:29:47.740459 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 21:29:47.740465 kernel: Fallback order for Node 0: 0
Sep 9 21:29:47.740472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 21:29:47.740478 kernel: Policy zone: DMA
Sep 9 21:29:47.740485 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 21:29:47.740491 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 21:29:47.740498 kernel: software IO TLB: area num 4.
Sep 9 21:29:47.740504 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 21:29:47.740511 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 21:29:47.740519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 21:29:47.740526 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 21:29:47.740533 kernel: rcu: RCU event tracing is enabled.
Sep 9 21:29:47.740540 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 21:29:47.740547 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 21:29:47.740553 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 21:29:47.740560 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 21:29:47.740567 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 21:29:47.740574 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:29:47.740580 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:29:47.740587 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 21:29:47.740595 kernel: GICv3: 256 SPIs implemented
Sep 9 21:29:47.740602 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 21:29:47.740608 kernel: Root IRQ handler: gic_handle_irq
Sep 9 21:29:47.740614 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 21:29:47.740621 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 21:29:47.740627 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 21:29:47.740634 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 21:29:47.740641 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 21:29:47.740647 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 21:29:47.740654 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 21:29:47.740661 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 21:29:47.740668 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 21:29:47.740675 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:29:47.740682 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 21:29:47.740689 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 21:29:47.740695 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 21:29:47.740702 kernel: arm-pv: using stolen time PV
Sep 9 21:29:47.740708 kernel: Console: colour dummy device 80x25
Sep 9 21:29:47.740715 kernel: ACPI: Core revision 20240827
Sep 9 21:29:47.740722 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 21:29:47.740729 kernel: pid_max: default: 32768 minimum: 301
Sep 9 21:29:47.740735 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 21:29:47.740743 kernel: landlock: Up and running.
Sep 9 21:29:47.740750 kernel: SELinux: Initializing.
Sep 9 21:29:47.740756 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 21:29:47.740763 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 21:29:47.740770 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 21:29:47.740776 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 21:29:47.740783 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 21:29:47.740790 kernel: Remapping and enabling EFI services.
Sep 9 21:29:47.740797 kernel: smp: Bringing up secondary CPUs ...
Sep 9 21:29:47.740808 kernel: Detected PIPT I-cache on CPU1
Sep 9 21:29:47.740815 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 21:29:47.740822 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 21:29:47.740830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:29:47.740837 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 21:29:47.740844 kernel: Detected PIPT I-cache on CPU2
Sep 9 21:29:47.740851 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 21:29:47.740858 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 21:29:47.740866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:29:47.740873 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 21:29:47.740880 kernel: Detected PIPT I-cache on CPU3
Sep 9 21:29:47.740887 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 21:29:47.740894 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 21:29:47.740901 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:29:47.740908 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 21:29:47.740914 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 21:29:47.740921 kernel: SMP: Total of 4 processors activated.
Sep 9 21:29:47.740929 kernel: CPU: All CPU(s) started at EL1
Sep 9 21:29:47.740936 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 21:29:47.740943 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 21:29:47.740950 kernel: CPU features: detected: Common not Private translations
Sep 9 21:29:47.740957 kernel: CPU features: detected: CRC32 instructions
Sep 9 21:29:47.740964 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 21:29:47.740970 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 21:29:47.740977 kernel: CPU features: detected: LSE atomic instructions
Sep 9 21:29:47.740984 kernel: CPU features: detected: Privileged Access Never
Sep 9 21:29:47.741007 kernel: CPU features: detected: RAS Extension Support
Sep 9 21:29:47.741014 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 21:29:47.741020 kernel: alternatives: applying system-wide alternatives
Sep 9 21:29:47.741027 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 21:29:47.741035 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 9 21:29:47.741041 kernel: devtmpfs: initialized
Sep 9 21:29:47.741048 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 21:29:47.741055 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 21:29:47.741062 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 21:29:47.741071 kernel: 0 pages in range for non-PLT usage
Sep 9 21:29:47.741078 kernel: 508560 pages in range for PLT usage
Sep 9 21:29:47.741084 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 21:29:47.741091 kernel: SMBIOS 3.0.0 present.
Sep 9 21:29:47.741098 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 21:29:47.741105 kernel: DMI: Memory slots populated: 1/1
Sep 9 21:29:47.741112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 21:29:47.741119 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 21:29:47.741126 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 21:29:47.741135 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 21:29:47.741142 kernel: audit: initializing netlink subsys (disabled)
Sep 9 21:29:47.741149 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 9 21:29:47.741156 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 21:29:47.741162 kernel: cpuidle: using governor menu
Sep 9 21:29:47.741169 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 21:29:47.741176 kernel: ASID allocator initialised with 32768 entries
Sep 9 21:29:47.741183 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 21:29:47.741190 kernel: Serial: AMBA PL011 UART driver
Sep 9 21:29:47.741198 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 21:29:47.741205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 21:29:47.741212 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 21:29:47.741219 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 21:29:47.741226 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 21:29:47.741233 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 21:29:47.741239 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 21:29:47.741246 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 21:29:47.741253 kernel: ACPI: Added _OSI(Module Device)
Sep 9 21:29:47.741261 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 21:29:47.741277 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 21:29:47.741284 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 21:29:47.741301 kernel: ACPI: Interpreter enabled
Sep 9 21:29:47.741308 kernel: ACPI: Using GIC for interrupt routing
Sep 9 21:29:47.741315 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 21:29:47.741322 kernel: ACPI: CPU0 has been hot-added
Sep 9 21:29:47.741334 kernel: ACPI: CPU1 has been hot-added
Sep 9 21:29:47.741342 kernel: ACPI: CPU2 has been hot-added
Sep 9 21:29:47.741349 kernel: ACPI: CPU3 has been hot-added
Sep 9 21:29:47.741358 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 21:29:47.741365 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 21:29:47.741372 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 21:29:47.741497 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 21:29:47.741561 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 21:29:47.741619 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 21:29:47.741676 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 21:29:47.741735 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 21:29:47.741744 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 21:29:47.741751 kernel: PCI host bridge to bus 0000:00
Sep 9 21:29:47.741813 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 21:29:47.741867 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 21:29:47.741918 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 21:29:47.741969 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 21:29:47.742044 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 21:29:47.742117 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 21:29:47.742179 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 21:29:47.742239 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 21:29:47.742323 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 21:29:47.742394 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 21:29:47.742453 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 21:29:47.742515 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 21:29:47.742569 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 21:29:47.742622 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 21:29:47.742676 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 21:29:47.742685 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 21:29:47.742692 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 21:29:47.742699 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 21:29:47.742708 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 21:29:47.742715 kernel: iommu: Default domain type: Translated
Sep 9 21:29:47.742722 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 21:29:47.742728 kernel: efivars: Registered efivars operations
Sep 9 21:29:47.742735 kernel: vgaarb: loaded
Sep 9 21:29:47.742742 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 21:29:47.742749 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 21:29:47.742756 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 21:29:47.742763 kernel: pnp: PnP ACPI init
Sep 9 21:29:47.742827 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 21:29:47.742837 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 21:29:47.742844 kernel: NET: Registered PF_INET protocol family
Sep 9 21:29:47.742851 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 21:29:47.742858 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 21:29:47.742865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 21:29:47.742872 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 21:29:47.742879 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 21:29:47.742887 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 21:29:47.742895 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 21:29:47.742902 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 21:29:47.742909 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 21:29:47.742915 kernel: PCI: CLS 0 bytes, default 64
Sep 9 21:29:47.742922 kernel: kvm [1]: HYP mode not available
Sep 9 21:29:47.742929 kernel: Initialise system trusted keyrings
Sep 9 21:29:47.742936 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 21:29:47.742943 kernel: Key type asymmetric registered
Sep 9 21:29:47.742951 kernel: Asymmetric key parser 'x509' registered
Sep 9 21:29:47.742958 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 21:29:47.742965 kernel: io scheduler mq-deadline registered
Sep 9 21:29:47.742972 kernel: io scheduler kyber registered
Sep 9 21:29:47.742979 kernel: io scheduler bfq registered
Sep 9 21:29:47.742986 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 21:29:47.742993 kernel: ACPI: button: Power Button [PWRB]
Sep 9 21:29:47.743000 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 21:29:47.743058 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 21:29:47.743068 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 21:29:47.743075 kernel: thunder_xcv, ver 1.0
Sep 9 21:29:47.743082 kernel: thunder_bgx, ver 1.0
Sep 9 21:29:47.743089 kernel: nicpf, ver 1.0
Sep 9 21:29:47.743096 kernel: nicvf, ver 1.0
Sep 9 21:29:47.743159 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 21:29:47.743215 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T21:29:47 UTC (1757453387)
Sep 9 21:29:47.743224 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 21:29:47.743232 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 21:29:47.743239 kernel: watchdog: NMI not fully supported
Sep 9 21:29:47.743246 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 21:29:47.743253 kernel: NET: Registered PF_INET6 protocol family
Sep 9 21:29:47.743260 kernel: Segment Routing with IPv6
Sep 9 21:29:47.743275 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 21:29:47.743293 kernel: NET: Registered PF_PACKET protocol family
Sep 9 21:29:47.743300 kernel: Key type dns_resolver registered
Sep 9 21:29:47.743307 kernel: registered taskstats version 1
Sep 9 21:29:47.743314 kernel: Loading compiled-in X.509 certificates
Sep 9 21:29:47.743323 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f5007e8dd2a6cc57a1fe19052a0aaf9985861c4d'
Sep 9 21:29:47.743335 kernel: Demotion targets for Node 0: null
Sep 9 21:29:47.743343 kernel: Key type .fscrypt registered
Sep 9 21:29:47.743349 kernel: Key type fscrypt-provisioning registered
Sep 9 21:29:47.743356 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 21:29:47.743363 kernel: ima: Allocated hash algorithm: sha1
Sep 9 21:29:47.743370 kernel: ima: No architecture policies found
Sep 9 21:29:47.743377 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 21:29:47.743385 kernel: clk: Disabling unused clocks
Sep 9 21:29:47.743392 kernel: PM: genpd: Disabling unused power domains
Sep 9 21:29:47.743399 kernel: Warning: unable to open an initial console.
Sep 9 21:29:47.743406 kernel: Freeing unused kernel memory: 38976K
Sep 9 21:29:47.743413 kernel: Run /init as init process
Sep 9 21:29:47.743419 kernel: with arguments:
Sep 9 21:29:47.743426 kernel: /init
Sep 9 21:29:47.743433 kernel: with environment:
Sep 9 21:29:47.743439 kernel: HOME=/
Sep 9 21:29:47.743447 kernel: TERM=linux
Sep 9 21:29:47.743454 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 21:29:47.743462 systemd[1]: Successfully made /usr/ read-only.
Sep 9 21:29:47.743472 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 21:29:47.743480 systemd[1]: Detected virtualization kvm.
Sep 9 21:29:47.743487 systemd[1]: Detected architecture arm64.
Sep 9 21:29:47.743494 systemd[1]: Running in initrd.
Sep 9 21:29:47.743501 systemd[1]: No hostname configured, using default hostname.
Sep 9 21:29:47.743510 systemd[1]: Hostname set to .
Sep 9 21:29:47.743517 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 21:29:47.743524 systemd[1]: Queued start job for default target initrd.target.
Sep 9 21:29:47.743532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:29:47.743539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:29:47.743547 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 21:29:47.743555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 21:29:47.743562 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 21:29:47.743572 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 21:29:47.743580 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 21:29:47.743587 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 21:29:47.743595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:29:47.743602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:29:47.743610 systemd[1]: Reached target paths.target - Path Units.
Sep 9 21:29:47.743618 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 21:29:47.743625 systemd[1]: Reached target swap.target - Swaps.
Sep 9 21:29:47.743633 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 21:29:47.743640 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 21:29:47.743647 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 21:29:47.743655 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 21:29:47.743662 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 21:29:47.743670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:29:47.743677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:29:47.743685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:29:47.743693 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 21:29:47.743700 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 21:29:47.743707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 21:29:47.743715 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 21:29:47.743722 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 21:29:47.743730 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 21:29:47.743737 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 21:29:47.743744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 21:29:47.743753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:29:47.743760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 21:29:47.743768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:29:47.743776 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 21:29:47.743799 systemd-journald[245]: Collecting audit messages is disabled.
Sep 9 21:29:47.743817 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 21:29:47.743825 systemd-journald[245]: Journal started
Sep 9 21:29:47.743844 systemd-journald[245]: Runtime Journal (/run/log/journal/76a50685f9cd45b397d58e015b74c999) is 6M, max 48.5M, 42.4M free.
Sep 9 21:29:47.749582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 21:29:47.749606 kernel: Bridge firewalling registered
Sep 9 21:29:47.749615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:29:47.735298 systemd-modules-load[246]: Inserted module 'overlay'
Sep 9 21:29:47.748578 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 9 21:29:47.753172 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 21:29:47.754528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:29:47.757539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 21:29:47.759053 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:29:47.760888 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 21:29:47.765383 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 21:29:47.767664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 21:29:47.773840 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 21:29:47.774520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:29:47.776553 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:29:47.779761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:29:47.784793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 21:29:47.786709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 21:29:47.799724 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 21:29:47.813766 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f5bd02e888bbcae51800cf37660dcdbf356eb05540a834019d706c2521a92d30
Sep 9 21:29:47.826737 systemd-resolved[288]: Positive Trust Anchors:
Sep 9 21:29:47.826756 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 21:29:47.826787 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 21:29:47.831429 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 9 21:29:47.832346 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 21:29:47.834672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 21:29:47.879300 kernel: SCSI subsystem initialized Sep 9 21:29:47.883289 kernel: Loading iSCSI transport class v2.0-870. Sep 9 21:29:47.892310 kernel: iscsi: registered transport (tcp) Sep 9 21:29:47.903300 kernel: iscsi: registered transport (qla4xxx) Sep 9 21:29:47.903315 kernel: QLogic iSCSI HBA Driver Sep 9 21:29:47.917941 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 21:29:47.938311 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:29:47.940174 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:29:47.981970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 21:29:47.983959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 21:29:48.042304 kernel: raid6: neonx8 gen() 15761 MB/s Sep 9 21:29:48.059302 kernel: raid6: neonx4 gen() 15783 MB/s Sep 9 21:29:48.076289 kernel: raid6: neonx2 gen() 13145 MB/s Sep 9 21:29:48.093285 kernel: raid6: neonx1 gen() 10426 MB/s Sep 9 21:29:48.110293 kernel: raid6: int64x8 gen() 6881 MB/s Sep 9 21:29:48.127289 kernel: raid6: int64x4 gen() 7328 MB/s Sep 9 21:29:48.144289 kernel: raid6: int64x2 gen() 6084 MB/s Sep 9 21:29:48.161286 kernel: raid6: int64x1 gen() 5037 MB/s Sep 9 21:29:48.161300 kernel: raid6: using algorithm neonx4 gen() 15783 MB/s Sep 9 21:29:48.178293 kernel: raid6: .... 
xor() 12284 MB/s, rmw enabled Sep 9 21:29:48.178318 kernel: raid6: using neon recovery algorithm Sep 9 21:29:48.183291 kernel: xor: measuring software checksum speed Sep 9 21:29:48.183325 kernel: 8regs : 21613 MB/sec Sep 9 21:29:48.183351 kernel: 32regs : 20152 MB/sec Sep 9 21:29:48.184653 kernel: arm64_neon : 28109 MB/sec Sep 9 21:29:48.184665 kernel: xor: using function: arm64_neon (28109 MB/sec) Sep 9 21:29:48.235301 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 21:29:48.241258 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:29:48.243640 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:29:48.267963 systemd-udevd[499]: Using default interface naming scheme 'v255'. Sep 9 21:29:48.272008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:29:48.274173 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 21:29:48.298164 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Sep 9 21:29:48.318973 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 21:29:48.320772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:29:48.368297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:29:48.370758 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 21:29:48.410755 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 9 21:29:48.410884 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 21:29:48.417736 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 21:29:48.417766 kernel: GPT:9289727 != 19775487 Sep 9 21:29:48.417775 kernel: GPT:Alternate GPT header not at the end of the disk. 
Sep 9 21:29:48.417784 kernel: GPT:9289727 != 19775487 Sep 9 21:29:48.418468 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 21:29:48.418491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:29:48.418992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:29:48.420069 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:29:48.422819 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:29:48.425202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:29:48.444896 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 21:29:48.454338 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 21:29:48.456251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:29:48.469705 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 21:29:48.475609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 21:29:48.476627 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 21:29:48.485166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 21:29:48.486180 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:29:48.487889 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:29:48.489640 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:29:48.491938 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 21:29:48.493528 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 21:29:48.509709 disk-uuid[591]: Primary Header is updated. 
Sep 9 21:29:48.509709 disk-uuid[591]: Secondary Entries is updated. Sep 9 21:29:48.509709 disk-uuid[591]: Secondary Header is updated. Sep 9 21:29:48.514407 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:29:48.513323 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:29:49.521295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 21:29:49.521605 disk-uuid[594]: The operation has completed successfully. Sep 9 21:29:49.546617 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 21:29:49.547495 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 21:29:49.570544 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 21:29:49.588092 sh[611]: Success Sep 9 21:29:49.599406 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 21:29:49.599440 kernel: device-mapper: uevent: version 1.0.3 Sep 9 21:29:49.600284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 21:29:49.607289 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 9 21:29:49.629975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 21:29:49.632511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 21:29:49.650355 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 21:29:49.654289 kernel: BTRFS: device fsid 0420e954-c3c6-4e24-9a07-863b2151b564 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (623) Sep 9 21:29:49.655898 kernel: BTRFS info (device dm-0): first mount of filesystem 0420e954-c3c6-4e24-9a07-863b2151b564 Sep 9 21:29:49.655914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 21:29:49.659459 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 21:29:49.659477 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 21:29:49.660398 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 21:29:49.661436 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 21:29:49.662641 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 21:29:49.663253 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 21:29:49.664585 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 21:29:49.686317 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654) Sep 9 21:29:49.688282 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c Sep 9 21:29:49.688325 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 21:29:49.690399 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:29:49.690431 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:29:49.694287 kernel: BTRFS info (device vda6): last unmount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c Sep 9 21:29:49.695400 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 21:29:49.697436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 21:29:49.766894 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 21:29:49.770015 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 21:29:49.807326 ignition[693]: Ignition 2.22.0 Sep 9 21:29:49.807934 ignition[693]: Stage: fetch-offline Sep 9 21:29:49.807979 ignition[693]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:29:49.808446 systemd-networkd[805]: lo: Link UP Sep 9 21:29:49.807987 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:29:49.808449 systemd-networkd[805]: lo: Gained carrier Sep 9 21:29:49.808060 ignition[693]: parsed url from cmdline: "" Sep 9 21:29:49.809082 systemd-networkd[805]: Enumeration completed Sep 9 21:29:49.808063 ignition[693]: no config URL provided Sep 9 21:29:49.809505 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:29:49.808067 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 21:29:49.809809 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:29:49.808076 ignition[693]: no config at "/usr/lib/ignition/user.ign" Sep 9 21:29:49.809813 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 21:29:49.808094 ignition[693]: op(1): [started] loading QEMU firmware config module Sep 9 21:29:49.810909 systemd-networkd[805]: eth0: Link UP Sep 9 21:29:49.808098 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 21:29:49.811193 systemd[1]: Reached target network.target - Network. Sep 9 21:29:49.815147 ignition[693]: op(1): [finished] loading QEMU firmware config module Sep 9 21:29:49.811232 systemd-networkd[805]: eth0: Gained carrier Sep 9 21:29:49.811241 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 21:29:49.824313 systemd-networkd[805]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 21:29:49.864785 ignition[693]: parsing config with SHA512: 535ddeb506937ec5276eb9db048da37d4c4e6e55cfdf9c2ec044e5335b1d4feb7427002914c5b7ba8ce4cefb06c7493bc5af7534c718c07109758e971b614511 Sep 9 21:29:49.870120 unknown[693]: fetched base config from "system" Sep 9 21:29:49.870132 unknown[693]: fetched user config from "qemu" Sep 9 21:29:49.870560 ignition[693]: fetch-offline: fetch-offline passed Sep 9 21:29:49.870611 ignition[693]: Ignition finished successfully Sep 9 21:29:49.872829 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:29:49.874404 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 21:29:49.875066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 21:29:49.911964 ignition[813]: Ignition 2.22.0 Sep 9 21:29:49.911983 ignition[813]: Stage: kargs Sep 9 21:29:49.912098 ignition[813]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:29:49.912107 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:29:49.912853 ignition[813]: kargs: kargs passed Sep 9 21:29:49.912894 ignition[813]: Ignition finished successfully Sep 9 21:29:49.916847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 21:29:49.918961 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 21:29:49.956486 ignition[821]: Ignition 2.22.0 Sep 9 21:29:49.956503 ignition[821]: Stage: disks Sep 9 21:29:49.956622 ignition[821]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:29:49.956631 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:29:49.957338 ignition[821]: disks: disks passed Sep 9 21:29:49.957380 ignition[821]: Ignition finished successfully Sep 9 21:29:49.961002 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 21:29:49.962394 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 21:29:49.963711 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 21:29:49.965251 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:29:49.966880 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:29:49.968290 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:29:49.970601 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 21:29:50.000180 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 21:29:50.007299 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 21:29:50.012108 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 21:29:50.075286 kernel: EXT4-fs (vda9): mounted filesystem 09d5f77d-9531-4ec2-9062-5fa777d03891 r/w with ordered data mode. Quota mode: none. Sep 9 21:29:50.075834 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 21:29:50.076905 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 21:29:50.079035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:29:50.082073 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 21:29:50.082934 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 9 21:29:50.082971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 21:29:50.082993 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 21:29:50.095584 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 21:29:50.097398 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 21:29:50.104119 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 9 21:29:50.104150 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c Sep 9 21:29:50.104161 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 21:29:50.109057 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:29:50.109095 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:29:50.110127 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 21:29:50.134144 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 21:29:50.138438 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 9 21:29:50.142744 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 21:29:50.146671 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 21:29:50.211456 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 21:29:50.214387 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 21:29:50.215757 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 21:29:50.232298 kernel: BTRFS info (device vda6): last unmount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c Sep 9 21:29:50.245621 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 21:29:50.254002 ignition[953]: INFO : Ignition 2.22.0 Sep 9 21:29:50.254002 ignition[953]: INFO : Stage: mount Sep 9 21:29:50.256392 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:29:50.256392 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:29:50.256392 ignition[953]: INFO : mount: mount passed Sep 9 21:29:50.256392 ignition[953]: INFO : Ignition finished successfully Sep 9 21:29:50.258657 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 21:29:50.261388 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 21:29:50.787329 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 21:29:50.788784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:29:50.805301 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Sep 9 21:29:50.807298 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c Sep 9 21:29:50.807326 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 21:29:50.809294 kernel: BTRFS info (device vda6): turning on async discard Sep 9 21:29:50.809321 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 21:29:50.810413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 21:29:50.846359 ignition[982]: INFO : Ignition 2.22.0 Sep 9 21:29:50.846359 ignition[982]: INFO : Stage: files Sep 9 21:29:50.847693 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:29:50.847693 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:29:50.847693 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Sep 9 21:29:50.850245 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 21:29:50.850245 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 21:29:50.850245 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 21:29:50.850245 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 21:29:50.850245 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 21:29:50.849893 unknown[982]: wrote ssh authorized keys file for user: core Sep 9 21:29:50.855991 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 21:29:50.855991 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 9 21:29:50.918571 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 21:29:50.919625 systemd-networkd[805]: eth0: Gained IPv6LL Sep 9 21:29:51.189174 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 21:29:51.189174 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 21:29:51.193013 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 21:29:51.205131 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 21:29:51.205131 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 21:29:51.205131 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 21:29:51.205131 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 21:29:51.205131 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 21:29:51.205131 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 9 21:29:51.782069 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 21:29:52.914790 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 21:29:52.914790 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 21:29:52.918924 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 21:29:52.933555 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 21:29:52.936929 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 21:29:52.939348 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for 
"coreos-metadata.service" Sep 9 21:29:52.939348 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 21:29:52.939348 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 21:29:52.939348 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 21:29:52.939348 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 21:29:52.939348 ignition[982]: INFO : files: files passed Sep 9 21:29:52.939348 ignition[982]: INFO : Ignition finished successfully Sep 9 21:29:52.940889 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 21:29:52.942896 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 21:29:52.944936 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 21:29:52.961036 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 21:29:52.961141 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 21:29:52.963440 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 21:29:52.964432 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:29:52.964432 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:29:52.967334 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:29:52.966113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 21:29:52.969674 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Sep 9 21:29:52.971794 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 21:29:53.000961 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 21:29:53.001395 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 21:29:53.002997 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 21:29:53.004541 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 21:29:53.006112 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 21:29:53.006827 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 21:29:53.020089 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 21:29:53.022371 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 21:29:53.050881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:29:53.052110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:29:53.053966 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 21:29:53.055487 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 21:29:53.055597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 21:29:53.057962 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 21:29:53.059691 systemd[1]: Stopped target basic.target - Basic System. Sep 9 21:29:53.061198 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 21:29:53.062871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 21:29:53.064643 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 21:29:53.066386 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Sep 9 21:29:53.068248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 21:29:53.069941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:29:53.071677 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 21:29:53.073403 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 21:29:53.074999 systemd[1]: Stopped target swap.target - Swaps. Sep 9 21:29:53.076352 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 21:29:53.076463 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:29:53.078665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:29:53.080403 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:29:53.082290 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 21:29:53.083847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:29:53.086014 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 21:29:53.086139 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 21:29:53.088421 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 21:29:53.088537 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:29:53.090450 systemd[1]: Stopped target paths.target - Path Units. Sep 9 21:29:53.092242 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 21:29:53.093519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:29:53.095750 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 21:29:53.096700 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 21:29:53.098075 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 9 21:29:53.098156 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:29:53.099563 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 21:29:53.099643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:29:53.100993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 21:29:53.101103 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 21:29:53.102414 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 21:29:53.102512 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 21:29:53.104769 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 21:29:53.106812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 21:29:53.107955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 21:29:53.108073 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:29:53.109683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 21:29:53.109785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 21:29:53.114428 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 21:29:53.116400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 21:29:53.125660 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 21:29:53.133574 ignition[1037]: INFO : Ignition 2.22.0 Sep 9 21:29:53.133574 ignition[1037]: INFO : Stage: umount Sep 9 21:29:53.135916 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:29:53.135916 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 21:29:53.135916 ignition[1037]: INFO : umount: umount passed Sep 9 21:29:53.135916 ignition[1037]: INFO : Ignition finished successfully Sep 9 21:29:53.137910 systemd[1]: ignition-mount.service: Deactivated successfully. 
Sep 9 21:29:53.138850 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 21:29:53.140409 systemd[1]: Stopped target network.target - Network. Sep 9 21:29:53.141792 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 21:29:53.141844 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 21:29:53.143836 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 21:29:53.143877 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 21:29:53.145238 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 21:29:53.145317 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 21:29:53.148923 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 21:29:53.148968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 21:29:53.150391 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 21:29:53.151805 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 21:29:53.156153 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 21:29:53.156248 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 21:29:53.159166 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 21:29:53.159375 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 21:29:53.159458 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 21:29:53.163569 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 21:29:53.164470 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 21:29:53.166160 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 21:29:53.166203 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:29:53.168796 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Sep 9 21:29:53.169565 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 21:29:53.169625 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 21:29:53.171220 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 21:29:53.171257 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:29:53.173700 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 21:29:53.173736 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:29:53.174674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 21:29:53.174712 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:29:53.177182 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:29:53.180451 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 21:29:53.180513 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 21:29:53.193999 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 21:29:53.197504 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:29:53.198799 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 21:29:53.198882 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 21:29:53.200367 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 21:29:53.200448 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 21:29:53.202383 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 21:29:53.202432 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:29:53.203922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 21:29:53.203955 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:29:53.205259 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 21:29:53.205319 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 21:29:53.207556 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 21:29:53.207597 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 21:29:53.209861 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 21:29:53.209908 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 21:29:53.212486 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 21:29:53.212537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 21:29:53.214748 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 21:29:53.215727 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 21:29:53.215784 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:29:53.219387 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 21:29:53.219427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:29:53.221398 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 21:29:53.221437 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 21:29:53.224169 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 21:29:53.224230 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:29:53.225843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 21:29:53.225879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:29:53.229232 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 21:29:53.229310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 21:29:53.229344 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 21:29:53.229374 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 21:29:53.235585 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 21:29:53.235656 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 21:29:53.237575 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 21:29:53.240334 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 21:29:53.262118 systemd[1]: Switching root.
Sep 9 21:29:53.294418 systemd-journald[245]: Journal stopped
Sep 9 21:29:54.014604 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 9 21:29:54.014676 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 21:29:54.014695 kernel: SELinux: policy capability open_perms=1
Sep 9 21:29:54.014705 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 21:29:54.014719 kernel: SELinux: policy capability always_check_network=0
Sep 9 21:29:54.014730 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 21:29:54.014739 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 21:29:54.014748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 21:29:54.014757 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 21:29:54.014766 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 21:29:54.014778 systemd[1]: Successfully loaded SELinux policy in 59.146ms.
Sep 9 21:29:54.014794 kernel: audit: type=1403 audit(1757453393.468:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 21:29:54.014807 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.427ms.
Sep 9 21:29:54.014818 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 21:29:54.014829 systemd[1]: Detected virtualization kvm.
Sep 9 21:29:54.014839 systemd[1]: Detected architecture arm64.
Sep 9 21:29:54.014848 systemd[1]: Detected first boot.
Sep 9 21:29:54.014858 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 21:29:54.014869 zram_generator::config[1082]: No configuration found.
Sep 9 21:29:54.014882 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 21:29:54.014892 systemd[1]: Populated /etc with preset unit settings.
Sep 9 21:29:54.014907 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 21:29:54.014917 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 21:29:54.014927 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 21:29:54.014937 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 21:29:54.014947 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 21:29:54.014957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 21:29:54.014968 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 21:29:54.014978 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 21:29:54.014988 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 21:29:54.014998 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 21:29:54.015009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 21:29:54.015019 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 21:29:54.015029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:29:54.015038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:29:54.015049 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 21:29:54.015060 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 21:29:54.015071 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 21:29:54.015081 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 21:29:54.015091 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 21:29:54.015101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:29:54.015111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:29:54.015125 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 21:29:54.015137 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 21:29:54.015147 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 21:29:54.015157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 21:29:54.015167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:29:54.015177 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 21:29:54.015187 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 21:29:54.015197 systemd[1]: Reached target swap.target - Swaps.
Sep 9 21:29:54.015208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 21:29:54.015218 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 21:29:54.015230 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 21:29:54.015240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:29:54.015250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:29:54.015260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:29:54.015284 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 21:29:54.015295 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 21:29:54.015312 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 21:29:54.015323 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 21:29:54.015332 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 21:29:54.015344 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 21:29:54.015354 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 21:29:54.015365 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 21:29:54.015375 systemd[1]: Reached target machines.target - Containers.
Sep 9 21:29:54.015385 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 21:29:54.015395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:29:54.015405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 21:29:54.015415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 21:29:54.015425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:29:54.015437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 21:29:54.015447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:29:54.015457 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 21:29:54.015467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:29:54.015478 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 21:29:54.015489 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 21:29:54.015499 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 21:29:54.015508 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 21:29:54.015519 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 21:29:54.015530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:29:54.015540 kernel: fuse: init (API version 7.41)
Sep 9 21:29:54.015549 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 21:29:54.015559 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 21:29:54.015568 kernel: loop: module loaded
Sep 9 21:29:54.015578 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 21:29:54.015588 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 21:29:54.015598 kernel: ACPI: bus type drm_connector registered
Sep 9 21:29:54.015609 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 21:29:54.015619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 21:29:54.015629 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 21:29:54.015639 systemd[1]: Stopped verity-setup.service.
Sep 9 21:29:54.015649 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 21:29:54.015661 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 21:29:54.015671 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 21:29:54.015682 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 21:29:54.015692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 21:29:54.015702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 21:29:54.015738 systemd-journald[1154]: Collecting audit messages is disabled.
Sep 9 21:29:54.015760 systemd-journald[1154]: Journal started
Sep 9 21:29:54.015782 systemd-journald[1154]: Runtime Journal (/run/log/journal/76a50685f9cd45b397d58e015b74c999) is 6M, max 48.5M, 42.4M free.
Sep 9 21:29:53.819868 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 21:29:53.838210 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 21:29:53.838572 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 21:29:54.017862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 21:29:54.019388 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 21:29:54.020113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:29:54.021373 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 21:29:54.021568 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 21:29:54.022660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:29:54.022838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:29:54.023941 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 21:29:54.024094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 21:29:54.025180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:29:54.025368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:29:54.026494 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 21:29:54.026641 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 21:29:54.027759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:29:54.027903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:29:54.029136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:29:54.030363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:29:54.031494 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 21:29:54.032819 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 21:29:54.044153 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 21:29:54.046167 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 21:29:54.047995 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 21:29:54.048952 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 21:29:54.049000 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 21:29:54.050600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 21:29:54.062063 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 21:29:54.063235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:29:54.064314 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 21:29:54.065974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 21:29:54.067057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 21:29:54.068540 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 21:29:54.069352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 21:29:54.071573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:29:54.077843 systemd-journald[1154]: Time spent on flushing to /var/log/journal/76a50685f9cd45b397d58e015b74c999 is 15.546ms for 889 entries.
Sep 9 21:29:54.077843 systemd-journald[1154]: System Journal (/var/log/journal/76a50685f9cd45b397d58e015b74c999) is 8M, max 195.6M, 187.6M free.
Sep 9 21:29:54.096781 systemd-journald[1154]: Received client request to flush runtime journal.
Sep 9 21:29:54.076424 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 21:29:54.087555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 21:29:54.091303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:29:54.092597 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 21:29:54.095577 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 21:29:54.098988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 21:29:54.100571 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 21:29:54.102991 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 21:29:54.106634 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 21:29:54.109442 kernel: loop0: detected capacity change from 0 to 211168
Sep 9 21:29:54.112832 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Sep 9 21:29:54.112851 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Sep 9 21:29:54.121289 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 21:29:54.122532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:29:54.124507 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 21:29:54.128378 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 21:29:54.142521 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 21:29:54.145288 kernel: loop1: detected capacity change from 0 to 119368
Sep 9 21:29:54.162017 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 21:29:54.164098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 21:29:54.188309 kernel: loop2: detected capacity change from 0 to 100632
Sep 9 21:29:54.190103 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Sep 9 21:29:54.190377 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Sep 9 21:29:54.193290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:29:54.222291 kernel: loop3: detected capacity change from 0 to 211168
Sep 9 21:29:54.229291 kernel: loop4: detected capacity change from 0 to 119368
Sep 9 21:29:54.236230 kernel: loop5: detected capacity change from 0 to 100632
Sep 9 21:29:54.239372 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 21:29:54.239737 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 9 21:29:54.247183 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 21:29:54.247200 systemd[1]: Reloading...
Sep 9 21:29:54.306302 zram_generator::config[1249]: No configuration found.
Sep 9 21:29:54.328898 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 21:29:54.444698 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 21:29:54.444829 systemd[1]: Reloading finished in 197 ms.
Sep 9 21:29:54.467302 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 21:29:54.468502 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 21:29:54.476505 systemd[1]: Starting ensure-sysext.service...
Sep 9 21:29:54.478059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 21:29:54.488954 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Sep 9 21:29:54.488969 systemd[1]: Reloading...
Sep 9 21:29:54.491205 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 21:29:54.491549 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 21:29:54.491840 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 21:29:54.492102 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 21:29:54.492853 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 21:29:54.493173 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 9 21:29:54.493351 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 9 21:29:54.498604 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 21:29:54.498706 systemd-tmpfiles[1286]: Skipping /boot
Sep 9 21:29:54.504511 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 21:29:54.504600 systemd-tmpfiles[1286]: Skipping /boot
Sep 9 21:29:54.534340 zram_generator::config[1313]: No configuration found.
Sep 9 21:29:54.660725 systemd[1]: Reloading finished in 171 ms.
Sep 9 21:29:54.671780 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 21:29:54.678300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:29:54.684023 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:29:54.686201 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 21:29:54.697010 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 21:29:54.699708 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 21:29:54.703532 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:29:54.709501 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 21:29:54.715323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:29:54.716940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:29:54.723352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:29:54.725926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:29:54.727257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:29:54.727400 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:29:54.728380 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 21:29:54.732776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:29:54.732926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:29:54.737552 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Sep 9 21:29:54.740582 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 21:29:54.742911 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:29:54.743352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:29:54.744694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:29:54.744836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:29:54.750080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:29:54.750887 augenrules[1382]: No rules
Sep 9 21:29:54.751493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:29:54.753332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:29:54.755161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:29:54.756730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:29:54.756841 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:29:54.766129 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 21:29:54.769732 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 21:29:54.770576 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 21:29:54.771523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:29:54.774116 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 21:29:54.775349 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 21:29:54.776718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 21:29:54.779808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:29:54.781357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:29:54.785506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:29:54.785688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:29:54.801089 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:29:54.801243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:29:54.805689 systemd[1]: Finished ensure-sysext.service.
Sep 9 21:29:54.807031 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 21:29:54.814827 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:29:54.815836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:29:54.817450 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:29:54.819087 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 21:29:54.823442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:29:54.824985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:29:54.825019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:29:54.829546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 21:29:54.832851 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 21:29:54.834452 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 21:29:54.834864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:29:54.835033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:29:54.838073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:29:54.838236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:29:54.842172 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 21:29:54.845789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:29:54.845844 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:29:54.859771 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:29:54.859969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:29:54.868340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 21:29:54.872055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 21:29:54.873947 augenrules[1429]: /sbin/augenrules: No change Sep 9 21:29:54.883611 augenrules[1459]: No rules Sep 9 21:29:54.885091 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:29:54.885743 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:29:54.893210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 21:29:54.896364 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 21:29:54.955058 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 21:29:54.956197 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 21:29:54.956708 systemd-resolved[1353]: Positive Trust Anchors: Sep 9 21:29:54.956727 systemd-resolved[1353]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:29:54.956757 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:29:54.960723 systemd-networkd[1435]: lo: Link UP Sep 9 21:29:54.960731 systemd-networkd[1435]: lo: Gained carrier Sep 9 21:29:54.961603 systemd-networkd[1435]: Enumeration completed Sep 9 21:29:54.962041 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:29:54.963119 systemd-resolved[1353]: Defaulting to hostname 'linux'. Sep 9 21:29:54.964574 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 21:29:54.965531 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:29:54.965542 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 21:29:54.967100 systemd-networkd[1435]: eth0: Link UP Sep 9 21:29:54.967214 systemd-networkd[1435]: eth0: Gained carrier Sep 9 21:29:54.967231 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 21:29:54.967826 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 21:29:54.969431 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
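Editor's note: unit names in the entries above, such as dev-disk-by\x2dlabel-OEM.device, use systemd's unit-name escaping, where `/` in a path becomes `-` and a literal `-` becomes `\x2d`. A minimal sketch of the reverse mapping for device units (systemd's full rules also escape arbitrary bytes as `\xNN`, which this does not handle):

```python
def unescape_device_unit(unit: str) -> str:
    """Recover the device path from an escaped systemd device unit name.

    Handles only the escapes visible in this log: '-' separates path
    components, and '\\x2d' encodes a literal '-'. systemd's real rules
    cover any byte as \\xNN.
    """
    name = unit.removesuffix(".device")
    parts = name.split("-")                           # '-' marks a '/' in the path
    parts = [p.replace("\\x2d", "-") for p in parts]  # '\x2d' is a literal '-'
    return "/" + "/".join(parts)
```

For example, `unescape_device_unit("dev-disk-by\\x2dlabel-OEM.device")` yields `/dev/disk/by-label/OEM`, matching the "Found device" entry above.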
Sep 9 21:29:54.970425 systemd[1]: Reached target network.target - Network. Sep 9 21:29:54.971388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:29:54.972289 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:29:54.973169 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 21:29:54.974251 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 21:29:54.975258 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 21:29:54.977465 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 21:29:54.978412 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 21:29:54.979276 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 21:29:54.979318 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:29:54.980017 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:29:54.981478 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 21:29:54.987242 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 21:29:54.990599 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 21:29:54.992477 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 21:29:54.993497 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 21:29:54.996309 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 21:29:54.997395 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. 
Sep 9 21:29:54.998503 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 21:29:54.998558 systemd-timesyncd[1436]: Initial clock synchronization to Tue 2025-09-09 21:29:55.310490 UTC. Sep 9 21:29:54.998682 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 21:29:54.999782 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 21:29:55.001550 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 21:29:55.002647 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:29:55.003487 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:29:55.004223 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:29:55.004253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:29:55.005102 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 21:29:55.007608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 21:29:55.010551 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 21:29:55.013584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 21:29:55.018244 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 21:29:55.019125 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 21:29:55.023300 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 21:29:55.027210 jq[1497]: false Sep 9 21:29:55.028116 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 21:29:55.031057 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
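Editor's note: the "Initial clock synchronization" entry above comes from an (S)NTP exchange with 10.0.0.1:123. NTP timestamps count seconds from 1900-01-01, while Unix time counts from 1970-01-01, a fixed offset of 2,208,988,800 seconds; a minimal conversion sketch:

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_DELTA = 2_208_988_800

def ntp_to_unix(ntp_seconds: float) -> float:
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_UNIX_DELTA
```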
Sep 9 21:29:55.033289 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 21:29:55.037582 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 21:29:55.039261 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 21:29:55.039656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 21:29:55.040208 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 21:29:55.043465 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 21:29:55.045134 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 21:29:55.051685 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 21:29:55.053363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 21:29:55.054074 extend-filesystems[1498]: Found /dev/vda6 Sep 9 21:29:55.054331 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 21:29:55.055088 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 21:29:55.055268 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 21:29:55.059545 extend-filesystems[1498]: Found /dev/vda9 Sep 9 21:29:55.066889 extend-filesystems[1498]: Checking size of /dev/vda9 Sep 9 21:29:55.068228 update_engine[1505]: I20250909 21:29:55.067925 1505 main.cc:92] Flatcar Update Engine starting Sep 9 21:29:55.068670 jq[1507]: true Sep 9 21:29:55.069597 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 21:29:55.069810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 9 21:29:55.076508 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 21:29:55.084245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:29:55.090860 tar[1517]: linux-arm64/LICENSE Sep 9 21:29:55.091559 tar[1517]: linux-arm64/helm Sep 9 21:29:55.091759 extend-filesystems[1498]: Resized partition /dev/vda9 Sep 9 21:29:55.095123 extend-filesystems[1539]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 21:29:55.100099 jq[1531]: true Sep 9 21:29:55.105544 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 21:29:55.119149 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 21:29:55.119458 systemd-logind[1504]: New seat seat0. Sep 9 21:29:55.120327 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 21:29:55.120588 dbus-daemon[1495]: [system] SELinux support is enabled Sep 9 21:29:55.121955 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 21:29:55.130130 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 21:29:55.130163 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 21:29:55.133477 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 21:29:55.133501 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 9 21:29:55.140871 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 21:29:55.140466 dbus-daemon[1495]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 21:29:55.141095 update_engine[1505]: I20250909 21:29:55.141010 1505 update_check_scheduler.cc:74] Next update check in 6m19s Sep 9 21:29:55.141372 systemd[1]: Started update-engine.service - Update Engine. Sep 9 21:29:55.144169 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 21:29:55.156639 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 21:29:55.156639 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 21:29:55.156639 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 21:29:55.156896 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 21:29:55.163679 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Sep 9 21:29:55.157125 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 21:29:55.169338 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Sep 9 21:29:55.180384 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:29:55.183653 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 21:29:55.189253 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
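Editor's note: the extend-filesystems output above grows the root filesystem on /dev/vda9 from 553,472 to 1,864,699 blocks at the 4 KiB block size reported in the EXT4-fs kernel messages. The arithmetic behind those numbers, as a sketch:

```python
BLOCK_SIZE = 4096  # 4 KiB ext4 block size, per the EXT4-fs kernel messages

def fs_bytes(blocks: int, block_size: int = BLOCK_SIZE) -> int:
    """Total filesystem size in bytes for a given block count."""
    return blocks * block_size

old = fs_bytes(553_472)    # size before the online resize
new = fs_bytes(1_864_699)  # size after resize2fs finished
grown = new - old          # bytes gained by extend-filesystems.service
```

The resized filesystem comes to 7,637,807,104 bytes, roughly 7.1 GiB.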
Sep 9 21:29:55.230747 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 21:29:55.274342 containerd[1526]: time="2025-09-09T21:29:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 21:29:55.274706 containerd[1526]: time="2025-09-09T21:29:55.274665673Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 21:29:55.285311 containerd[1526]: time="2025-09-09T21:29:55.285063486Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.934µs" Sep 9 21:29:55.285311 containerd[1526]: time="2025-09-09T21:29:55.285096402Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 21:29:55.285311 containerd[1526]: time="2025-09-09T21:29:55.285115894Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 21:29:55.285311 containerd[1526]: time="2025-09-09T21:29:55.285249801Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 21:29:55.285311 containerd[1526]: time="2025-09-09T21:29:55.285265719Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 21:29:55.285311 containerd[1526]: time="2025-09-09T21:29:55.285287580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:29:55.285463 containerd[1526]: time="2025-09-09T21:29:55.285356237Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:29:55.285463 containerd[1526]: time="2025-09-09T21:29:55.285370950Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:29:55.285586 containerd[1526]: time="2025-09-09T21:29:55.285555893Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:29:55.285586 containerd[1526]: time="2025-09-09T21:29:55.285580538Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:29:55.285631 containerd[1526]: time="2025-09-09T21:29:55.285592009Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:29:55.285631 containerd[1526]: time="2025-09-09T21:29:55.285615906Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 21:29:55.286138 containerd[1526]: time="2025-09-09T21:29:55.285686226Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 21:29:55.286138 containerd[1526]: time="2025-09-09T21:29:55.285887503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:29:55.286138 containerd[1526]: time="2025-09-09T21:29:55.285918673Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:29:55.286138 containerd[1526]: time="2025-09-09T21:29:55.285928523Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 21:29:55.286138 containerd[1526]: time="2025-09-09T21:29:55.285964473Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 21:29:55.286312 containerd[1526]: time="2025-09-09T21:29:55.286184618Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 21:29:55.286312 containerd[1526]: time="2025-09-09T21:29:55.286246543Z" level=info msg="metadata content store policy set" policy=shared Sep 9 21:29:55.289310 containerd[1526]: time="2025-09-09T21:29:55.289262909Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 21:29:55.289381 containerd[1526]: time="2025-09-09T21:29:55.289352887Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 21:29:55.289381 containerd[1526]: time="2025-09-09T21:29:55.289379818Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289392826Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289405710Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289418012Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289428776Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289440330Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289452299Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289462357Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289471500Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 21:29:55.289483 containerd[1526]: time="2025-09-09T21:29:55.289483261Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 21:29:55.289657 containerd[1526]: time="2025-09-09T21:29:55.289589365Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 21:29:55.289657 containerd[1526]: time="2025-09-09T21:29:55.289614509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 21:29:55.289657 containerd[1526]: time="2025-09-09T21:29:55.289629263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 21:29:55.289657 containerd[1526]: time="2025-09-09T21:29:55.289640443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 21:29:55.289657 containerd[1526]: time="2025-09-09T21:29:55.289653368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 21:29:55.289737 containerd[1526]: time="2025-09-09T21:29:55.289667332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 21:29:55.289737 containerd[1526]: time="2025-09-09T21:29:55.289679218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 21:29:55.289737 containerd[1526]: time="2025-09-09T21:29:55.289689775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 21:29:55.289737 containerd[1526]: 
time="2025-09-09T21:29:55.289700913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 21:29:55.290622 containerd[1526]: time="2025-09-09T21:29:55.289958296Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 21:29:55.290622 containerd[1526]: time="2025-09-09T21:29:55.289990672Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 21:29:55.291451 containerd[1526]: time="2025-09-09T21:29:55.291419266Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 21:29:55.291498 containerd[1526]: time="2025-09-09T21:29:55.291466146Z" level=info msg="Start snapshots syncer" Sep 9 21:29:55.291519 containerd[1526]: time="2025-09-09T21:29:55.291494117Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 21:29:55.291894 containerd[1526]: time="2025-09-09T21:29:55.291788239Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 21:29:55.291894 containerd[1526]: time="2025-09-09T21:29:55.291850912Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 21:29:55.292035 containerd[1526]: time="2025-09-09T21:29:55.291933575Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292438035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292480343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292493144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292504490Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292517124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292527971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292540232Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292567412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292579839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292594593Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292635155Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292649327Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:29:55.293306 containerd[1526]: time="2025-09-09T21:29:55.292658180Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292667198Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292677007Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292689932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292701070Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292780243Z" level=info msg="runtime interface created" Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292785687Z" level=info msg="created NRI interface" Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292793874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292806176Z" level=info msg="Connect containerd service" Sep 9 21:29:55.293567 containerd[1526]: time="2025-09-09T21:29:55.292834396Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 21:29:55.293567 containerd[1526]: 
time="2025-09-09T21:29:55.293522843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:29:55.358833 containerd[1526]: time="2025-09-09T21:29:55.358768449Z" level=info msg="Start subscribing containerd event" Sep 9 21:29:55.359003 containerd[1526]: time="2025-09-09T21:29:55.358970390Z" level=info msg="Start recovering state" Sep 9 21:29:55.359116 containerd[1526]: time="2025-09-09T21:29:55.359062820Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 21:29:55.359148 containerd[1526]: time="2025-09-09T21:29:55.359120382Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 21:29:55.359228 containerd[1526]: time="2025-09-09T21:29:55.359212895Z" level=info msg="Start event monitor" Sep 9 21:29:55.359326 containerd[1526]: time="2025-09-09T21:29:55.359313928Z" level=info msg="Start cni network conf syncer for default" Sep 9 21:29:55.359405 containerd[1526]: time="2025-09-09T21:29:55.359394223Z" level=info msg="Start streaming server" Sep 9 21:29:55.359535 containerd[1526]: time="2025-09-09T21:29:55.359444303Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 21:29:55.359535 containerd[1526]: time="2025-09-09T21:29:55.359485032Z" level=info msg="runtime interface starting up..." Sep 9 21:29:55.359535 containerd[1526]: time="2025-09-09T21:29:55.359492014Z" level=info msg="starting plugins..." Sep 9 21:29:55.359535 containerd[1526]: time="2025-09-09T21:29:55.359509968Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 21:29:55.359912 containerd[1526]: time="2025-09-09T21:29:55.359881102Z" level=info msg="containerd successfully booted in 0.086341s" Sep 9 21:29:55.359979 systemd[1]: Started containerd.service - containerd container runtime. 
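Editor's note: the "failed to load cni during init" error above is expected on a first boot — /etc/cni/net.d is empty until a CNI plugin or kubeadm installs a network config, after which the CRI plugin's conf syncer picks it up. For illustration only, a minimal bridge conflist of the kind the plugin would load (the name, bridge, and subnet here are placeholder assumptions, not values from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Dropped into /etc/cni/net.d (e.g. as a `.conflist` file), a config like this clears the "no network config found" condition.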
Sep 9 21:29:55.452558 tar[1517]: linux-arm64/README.md Sep 9 21:29:55.472320 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 21:29:55.533262 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 21:29:55.552417 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 21:29:55.556641 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 21:29:55.581557 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 21:29:55.583356 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 21:29:55.585469 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 21:29:55.601402 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 21:29:55.603698 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 21:29:55.605555 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 21:29:55.606611 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 21:29:56.808767 systemd-networkd[1435]: eth0: Gained IPv6LL Sep 9 21:29:56.811265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 21:29:56.812992 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 21:29:56.815403 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 21:29:56.817677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:29:56.845657 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 21:29:56.860073 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 21:29:56.861350 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 21:29:56.862672 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 21:29:56.869343 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 9 21:29:57.399901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:29:57.401376 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 21:29:57.404921 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:29:57.406392 systemd[1]: Startup finished in 1.972s (kernel) + 5.859s (initrd) + 3.999s (userspace) = 11.831s. Sep 9 21:29:57.756263 kubelet[1639]: E0909 21:29:57.756152 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:29:57.758902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:29:57.759039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:29:57.759393 systemd[1]: kubelet.service: Consumed 748ms CPU time, 258.2M memory peak. Sep 9 21:30:00.412889 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 21:30:00.413948 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:36594.service - OpenSSH per-connection server daemon (10.0.0.1:36594). Sep 9 21:30:00.484203 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:30:00.485875 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:30:00.491548 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 21:30:00.492399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 21:30:00.498341 systemd-logind[1504]: New session 1 of user core. 
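Editor's note: the "Startup finished" entry above prints each phase rounded independently of the total, so the printed phases (1.972 + 5.859 + 3.999 = 11.830) can disagree with the printed total (11.831) by a millisecond. A small parser for these lines, as a sketch:

```python
import re

def parse_startup(line: str) -> dict:
    """Extract per-phase and total startup times (in seconds) from a
    systemd 'Startup finished' log line."""
    phases = {name: float(sec)
              for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    total = float(re.search(r"= ([\d.]+)s", line).group(1))
    return {"phases": phases, "total": total}
```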
Sep 9 21:30:00.511960 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 21:30:00.514314 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 21:30:00.533050 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 21:30:00.535205 systemd-logind[1504]: New session c1 of user core. Sep 9 21:30:00.642817 systemd[1657]: Queued start job for default target default.target. Sep 9 21:30:00.661257 systemd[1657]: Created slice app.slice - User Application Slice. Sep 9 21:30:00.661309 systemd[1657]: Reached target paths.target - Paths. Sep 9 21:30:00.661347 systemd[1657]: Reached target timers.target - Timers. Sep 9 21:30:00.662511 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 21:30:00.671723 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 21:30:00.671785 systemd[1657]: Reached target sockets.target - Sockets. Sep 9 21:30:00.671823 systemd[1657]: Reached target basic.target - Basic System. Sep 9 21:30:00.671851 systemd[1657]: Reached target default.target - Main User Target. Sep 9 21:30:00.671875 systemd[1657]: Startup finished in 131ms. Sep 9 21:30:00.672122 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 21:30:00.673408 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 21:30:00.734687 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:36608.service - OpenSSH per-connection server daemon (10.0.0.1:36608). Sep 9 21:30:00.783861 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 36608 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:30:00.784923 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:30:00.789392 systemd-logind[1504]: New session 2 of user core. Sep 9 21:30:00.800204 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 9 21:30:00.853320 sshd[1671]: Connection closed by 10.0.0.1 port 36608
Sep 9 21:30:00.853801 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Sep 9 21:30:00.867242 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:36608.service: Deactivated successfully.
Sep 9 21:30:00.868626 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 21:30:00.870556 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit.
Sep 9 21:30:00.872066 systemd-logind[1504]: Removed session 2.
Sep 9 21:30:00.873613 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:36622.service - OpenSSH per-connection server daemon (10.0.0.1:36622).
Sep 9 21:30:00.931315 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 36622 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:30:00.932477 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:30:00.937077 systemd-logind[1504]: New session 3 of user core.
Sep 9 21:30:00.952446 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 21:30:01.001303 sshd[1680]: Connection closed by 10.0.0.1 port 36622
Sep 9 21:30:01.001176 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Sep 9 21:30:01.013212 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:36622.service: Deactivated successfully.
Sep 9 21:30:01.014593 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 21:30:01.015220 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit.
Sep 9 21:30:01.017401 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:36636.service - OpenSSH per-connection server daemon (10.0.0.1:36636).
Sep 9 21:30:01.018307 systemd-logind[1504]: Removed session 3.
Sep 9 21:30:01.065520 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 36636 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:30:01.066630 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:30:01.071038 systemd-logind[1504]: New session 4 of user core.
Sep 9 21:30:01.081419 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 21:30:01.133045 sshd[1689]: Connection closed by 10.0.0.1 port 36636
Sep 9 21:30:01.133455 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Sep 9 21:30:01.144714 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:36636.service: Deactivated successfully.
Sep 9 21:30:01.147479 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 21:30:01.148055 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit.
Sep 9 21:30:01.150050 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:36650.service - OpenSSH per-connection server daemon (10.0.0.1:36650).
Sep 9 21:30:01.150566 systemd-logind[1504]: Removed session 4.
Sep 9 21:30:01.201855 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 36650 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:30:01.202972 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:30:01.206551 systemd-logind[1504]: New session 5 of user core.
Sep 9 21:30:01.222477 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 21:30:01.278777 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 21:30:01.279037 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:30:01.292223 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 9 21:30:01.293597 sshd[1698]: Connection closed by 10.0.0.1 port 36650
Sep 9 21:30:01.294221 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
Sep 9 21:30:01.301097 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:36650.service: Deactivated successfully.
Sep 9 21:30:01.302797 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 21:30:01.304791 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit.
Sep 9 21:30:01.307579 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662).
Sep 9 21:30:01.308025 systemd-logind[1504]: Removed session 5.
Sep 9 21:30:01.353984 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:30:01.355113 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:30:01.358516 systemd-logind[1504]: New session 6 of user core.
Sep 9 21:30:01.364421 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 21:30:01.414501 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 21:30:01.414781 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:30:01.418888 sudo[1710]: pam_unix(sudo:session): session closed for user root
Sep 9 21:30:01.423157 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 21:30:01.423437 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:30:01.430941 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:30:01.469367 augenrules[1732]: No rules
Sep 9 21:30:01.470532 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 21:30:01.470770 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 21:30:01.471555 sudo[1709]: pam_unix(sudo:session): session closed for user root
Sep 9 21:30:01.472733 sshd[1708]: Connection closed by 10.0.0.1 port 36662
Sep 9 21:30:01.473064 sshd-session[1705]: pam_unix(sshd:session): session closed for user core
Sep 9 21:30:01.480102 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:36662.service: Deactivated successfully.
Sep 9 21:30:01.481468 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 21:30:01.482419 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit.
Sep 9 21:30:01.484532 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:36664.service - OpenSSH per-connection server daemon (10.0.0.1:36664).
Sep 9 21:30:01.484963 systemd-logind[1504]: Removed session 6.
Sep 9 21:30:01.531796 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 36664 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:30:01.532786 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:30:01.536008 systemd-logind[1504]: New session 7 of user core.
Sep 9 21:30:01.544416 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 21:30:01.594108 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 21:30:01.594380 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:30:01.861428 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 21:30:01.879569 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 21:30:02.081843 dockerd[1765]: time="2025-09-09T21:30:02.081777001Z" level=info msg="Starting up"
Sep 9 21:30:02.082623 dockerd[1765]: time="2025-09-09T21:30:02.082602170Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 21:30:02.092652 dockerd[1765]: time="2025-09-09T21:30:02.092610814Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 21:30:02.183906 systemd[1]: var-lib-docker-metacopy\x2dcheck2221517731-merged.mount: Deactivated successfully.
Sep 9 21:30:02.194380 dockerd[1765]: time="2025-09-09T21:30:02.194334386Z" level=info msg="Loading containers: start."
Sep 9 21:30:02.202316 kernel: Initializing XFRM netlink socket
Sep 9 21:30:02.390832 systemd-networkd[1435]: docker0: Link UP
Sep 9 21:30:02.394078 dockerd[1765]: time="2025-09-09T21:30:02.394033381Z" level=info msg="Loading containers: done."
Sep 9 21:30:02.405431 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck815186045-merged.mount: Deactivated successfully.
Sep 9 21:30:02.407815 dockerd[1765]: time="2025-09-09T21:30:02.407758008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 21:30:02.407897 dockerd[1765]: time="2025-09-09T21:30:02.407838949Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 21:30:02.407921 dockerd[1765]: time="2025-09-09T21:30:02.407911403Z" level=info msg="Initializing buildkit"
Sep 9 21:30:02.427129 dockerd[1765]: time="2025-09-09T21:30:02.427059089Z" level=info msg="Completed buildkit initialization"
Sep 9 21:30:02.433597 dockerd[1765]: time="2025-09-09T21:30:02.433549126Z" level=info msg="Daemon has completed initialization"
Sep 9 21:30:02.433879 dockerd[1765]: time="2025-09-09T21:30:02.433616787Z" level=info msg="API listen on /run/docker.sock"
Sep 9 21:30:02.433761 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 21:30:02.951039 containerd[1526]: time="2025-09-09T21:30:02.950687905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 21:30:03.561214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620601850.mount: Deactivated successfully.
Sep 9 21:30:04.867038 containerd[1526]: time="2025-09-09T21:30:04.866960113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:04.867670 containerd[1526]: time="2025-09-09T21:30:04.867645619Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 9 21:30:04.868275 containerd[1526]: time="2025-09-09T21:30:04.868230439Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:04.871045 containerd[1526]: time="2025-09-09T21:30:04.871008928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:04.872019 containerd[1526]: time="2025-09-09T21:30:04.871982817Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.92125584s"
Sep 9 21:30:04.872019 containerd[1526]: time="2025-09-09T21:30:04.872017782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 9 21:30:04.873614 containerd[1526]: time="2025-09-09T21:30:04.873549620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 21:30:06.285659 containerd[1526]: time="2025-09-09T21:30:06.285603216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:06.286324 containerd[1526]: time="2025-09-09T21:30:06.286248922Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 9 21:30:06.286982 containerd[1526]: time="2025-09-09T21:30:06.286941486Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:06.289842 containerd[1526]: time="2025-09-09T21:30:06.289798592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:06.290862 containerd[1526]: time="2025-09-09T21:30:06.290732746Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.417146006s"
Sep 9 21:30:06.290862 containerd[1526]: time="2025-09-09T21:30:06.290773266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 9 21:30:06.291337 containerd[1526]: time="2025-09-09T21:30:06.291310931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 21:30:07.423439 containerd[1526]: time="2025-09-09T21:30:07.422212311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:07.423769 containerd[1526]: time="2025-09-09T21:30:07.423743205Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 9 21:30:07.424827 containerd[1526]: time="2025-09-09T21:30:07.424789522Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:07.427540 containerd[1526]: time="2025-09-09T21:30:07.427482233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:07.428577 containerd[1526]: time="2025-09-09T21:30:07.428534194Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.137190784s"
Sep 9 21:30:07.428577 containerd[1526]: time="2025-09-09T21:30:07.428573783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 9 21:30:07.429578 containerd[1526]: time="2025-09-09T21:30:07.429550396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 21:30:07.873203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 21:30:07.875092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:30:08.039910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:30:08.044136 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 21:30:08.094370 kubelet[2055]: E0909 21:30:08.094309 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 21:30:08.097747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 21:30:08.098000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 21:30:08.099377 systemd[1]: kubelet.service: Consumed 151ms CPU time, 106.7M memory peak.
Sep 9 21:30:08.476749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683113657.mount: Deactivated successfully.
Sep 9 21:30:08.849155 containerd[1526]: time="2025-09-09T21:30:08.849031289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:08.849867 containerd[1526]: time="2025-09-09T21:30:08.849815362Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 9 21:30:08.850547 containerd[1526]: time="2025-09-09T21:30:08.850514898Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:08.852364 containerd[1526]: time="2025-09-09T21:30:08.852328361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:08.853126 containerd[1526]: time="2025-09-09T21:30:08.853095317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.423510823s"
Sep 9 21:30:08.853165 containerd[1526]: time="2025-09-09T21:30:08.853133659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 9 21:30:08.853813 containerd[1526]: time="2025-09-09T21:30:08.853774072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 9 21:30:09.355421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2311850550.mount: Deactivated successfully.
Sep 9 21:30:10.046219 containerd[1526]: time="2025-09-09T21:30:10.046147796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:10.047615 containerd[1526]: time="2025-09-09T21:30:10.047281814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 9 21:30:10.049297 containerd[1526]: time="2025-09-09T21:30:10.049244088Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:10.052301 containerd[1526]: time="2025-09-09T21:30:10.052237803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:10.053337 containerd[1526]: time="2025-09-09T21:30:10.053299884Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.199489662s"
Sep 9 21:30:10.053389 containerd[1526]: time="2025-09-09T21:30:10.053340215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 9 21:30:10.053813 containerd[1526]: time="2025-09-09T21:30:10.053754867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 21:30:10.497703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668673674.mount: Deactivated successfully.
Sep 9 21:30:10.502474 containerd[1526]: time="2025-09-09T21:30:10.502416810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:30:10.503081 containerd[1526]: time="2025-09-09T21:30:10.503049041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 21:30:10.504188 containerd[1526]: time="2025-09-09T21:30:10.504152539Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:30:10.506504 containerd[1526]: time="2025-09-09T21:30:10.506460181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:30:10.507141 containerd[1526]: time="2025-09-09T21:30:10.507101459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 453.24289ms"
Sep 9 21:30:10.507141 containerd[1526]: time="2025-09-09T21:30:10.507133185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 21:30:10.507991 containerd[1526]: time="2025-09-09T21:30:10.507945639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 9 21:30:10.953370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146141443.mount: Deactivated successfully.
Sep 9 21:30:12.801517 containerd[1526]: time="2025-09-09T21:30:12.801452181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:12.802545 containerd[1526]: time="2025-09-09T21:30:12.802516895Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 9 21:30:12.803324 containerd[1526]: time="2025-09-09T21:30:12.803248029Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:12.807200 containerd[1526]: time="2025-09-09T21:30:12.807148244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:30:12.808717 containerd[1526]: time="2025-09-09T21:30:12.808676658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.300694758s"
Sep 9 21:30:12.808757 containerd[1526]: time="2025-09-09T21:30:12.808718747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 9 21:30:17.029670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:30:17.029823 systemd[1]: kubelet.service: Consumed 151ms CPU time, 106.7M memory peak.
Sep 9 21:30:17.031788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:30:17.051585 systemd[1]: Reload requested from client PID 2210 ('systemctl') (unit session-7.scope)...
Sep 9 21:30:17.051600 systemd[1]: Reloading...
Sep 9 21:30:17.117404 zram_generator::config[2254]: No configuration found.
Sep 9 21:30:17.278932 systemd[1]: Reloading finished in 227 ms.
Sep 9 21:30:17.335735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:30:17.338220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:30:17.339418 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 21:30:17.339599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:30:17.339637 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.1M memory peak.
Sep 9 21:30:17.340953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:30:17.465213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:30:17.470430 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 21:30:17.503477 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:30:17.503477 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 21:30:17.503477 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:30:17.503829 kubelet[2303]: I0909 21:30:17.503512 2303 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 21:30:18.226280 kubelet[2303]: I0909 21:30:18.226226 2303 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 21:30:18.226280 kubelet[2303]: I0909 21:30:18.226265 2303 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 21:30:18.226527 kubelet[2303]: I0909 21:30:18.226498 2303 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 21:30:18.251657 kubelet[2303]: E0909 21:30:18.251606 2303 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 9 21:30:18.252594 kubelet[2303]: I0909 21:30:18.252555 2303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 21:30:18.260828 kubelet[2303]: I0909 21:30:18.260730 2303 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 21:30:18.263638 kubelet[2303]: I0909 21:30:18.263598 2303 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 21:30:18.265291 kubelet[2303]: I0909 21:30:18.264770 2303 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 21:30:18.265291 kubelet[2303]: I0909 21:30:18.264813 2303 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 21:30:18.265291 kubelet[2303]: I0909 21:30:18.265037 2303 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 21:30:18.265291 kubelet[2303]: I0909 21:30:18.265046 2303 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 21:30:18.265812 kubelet[2303]: I0909 21:30:18.265784 2303 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:30:18.268819 kubelet[2303]: I0909 21:30:18.268675 2303 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 21:30:18.268819 kubelet[2303]: I0909 21:30:18.268703 2303 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 21:30:18.268819 kubelet[2303]: I0909 21:30:18.268732 2303 kubelet.go:386] "Adding apiserver pod source"
Sep 9 21:30:18.269769 kubelet[2303]: I0909 21:30:18.269750 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 21:30:18.270894 kubelet[2303]: I0909 21:30:18.270864 2303 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 21:30:18.271859 kubelet[2303]: I0909 21:30:18.271565 2303 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 21:30:18.271859 kubelet[2303]: W0909 21:30:18.271691 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 21:30:18.272837 kubelet[2303]: E0909 21:30:18.272803 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 9 21:30:18.273502 kubelet[2303]: E0909 21:30:18.273467 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 21:30:18.274465 kubelet[2303]: I0909 21:30:18.274425 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 21:30:18.274518 kubelet[2303]: I0909 21:30:18.274473 2303 server.go:1289] "Started kubelet"
Sep 9 21:30:18.274629 kubelet[2303]: I0909 21:30:18.274593 2303 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 21:30:18.275670 kubelet[2303]: I0909 21:30:18.275651 2303 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 21:30:18.278129 kubelet[2303]: I0909 21:30:18.277965 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 21:30:18.278129 kubelet[2303]: E0909 21:30:18.276862 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863ba92918b52ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:30:18.274443948 +0000 UTC m=+0.800477964,LastTimestamp:2025-09-09 21:30:18.274443948 +0000 UTC m=+0.800477964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 21:30:18.278447 kubelet[2303]: I0909 21:30:18.278376 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 21:30:18.278447 kubelet[2303]: I0909 21:30:18.278425 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 21:30:18.278700 kubelet[2303]: I0909 21:30:18.278669 2303 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 21:30:18.278780 kubelet[2303]: E0909 21:30:18.278762 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 21:30:18.278814 kubelet[2303]: I0909 21:30:18.278791 2303 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 21:30:18.279204 kubelet[2303]: I0909 21:30:18.278977 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 21:30:18.279204 kubelet[2303]: I0909 21:30:18.279037 2303 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 21:30:18.279495 kubelet[2303]: E0909 21:30:18.279469 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 21:30:18.279986 kubelet[2303]: I0909 21:30:18.279961 2303 factory.go:223] Registration of the systemd container factory successfully
Sep 9 21:30:18.280049 kubelet[2303]: I0909 21:30:18.280039 2303 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 21:30:18.281315 kubelet[2303]: E0909 21:30:18.281263 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms"
Sep 9 21:30:18.281803 kubelet[2303]: E0909 21:30:18.281777 2303 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 21:30:18.281982 kubelet[2303]: I0909 21:30:18.281963 2303 factory.go:223] Registration of the containerd container factory successfully
Sep 9 21:30:18.293961 kubelet[2303]: I0909 21:30:18.293922 2303 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 21:30:18.293961 kubelet[2303]: I0909 21:30:18.293942 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 21:30:18.293961 kubelet[2303]: I0909 21:30:18.293960 2303 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:30:18.298850 kubelet[2303]: I0909 21:30:18.298802 2303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 21:30:18.300023 kubelet[2303]: I0909 21:30:18.299993 2303 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 21:30:18.300023 kubelet[2303]: I0909 21:30:18.300021 2303 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 21:30:18.300124 kubelet[2303]: I0909 21:30:18.300041 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 21:30:18.300124 kubelet[2303]: I0909 21:30:18.300049 2303 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 21:30:18.300124 kubelet[2303]: E0909 21:30:18.300090 2303 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:30:18.301387 kubelet[2303]: E0909 21:30:18.301243 2303 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 21:30:18.322518 kubelet[2303]: I0909 21:30:18.322457 2303 policy_none.go:49] "None policy: Start" Sep 9 21:30:18.322691 kubelet[2303]: I0909 21:30:18.322525 2303 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 21:30:18.322691 kubelet[2303]: I0909 21:30:18.322591 2303 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:30:18.328970 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 21:30:18.338945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 21:30:18.341695 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 21:30:18.350227 kubelet[2303]: E0909 21:30:18.350191 2303 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 21:30:18.350453 kubelet[2303]: I0909 21:30:18.350436 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:30:18.350503 kubelet[2303]: I0909 21:30:18.350450 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:30:18.351027 kubelet[2303]: I0909 21:30:18.350981 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:30:18.352355 kubelet[2303]: E0909 21:30:18.352325 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 21:30:18.352477 kubelet[2303]: E0909 21:30:18.352463 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 21:30:18.409574 systemd[1]: Created slice kubepods-burstable-pod32fe9aa4aa1612dff4109aa3d8f73c12.slice - libcontainer container kubepods-burstable-pod32fe9aa4aa1612dff4109aa3d8f73c12.slice. Sep 9 21:30:18.430754 kubelet[2303]: E0909 21:30:18.430701 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:18.433380 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 9 21:30:18.440313 kubelet[2303]: E0909 21:30:18.440266 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:18.442517 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 9 21:30:18.444027 kubelet[2303]: E0909 21:30:18.444001 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:18.452140 kubelet[2303]: I0909 21:30:18.452063 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:30:18.452584 kubelet[2303]: E0909 21:30:18.452545 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Sep 9 21:30:18.480992 kubelet[2303]: I0909 21:30:18.480904 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32fe9aa4aa1612dff4109aa3d8f73c12-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32fe9aa4aa1612dff4109aa3d8f73c12\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:18.481299 kubelet[2303]: I0909 21:30:18.481097 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32fe9aa4aa1612dff4109aa3d8f73c12-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32fe9aa4aa1612dff4109aa3d8f73c12\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:18.481299 kubelet[2303]: I0909 21:30:18.481127 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:18.481299 kubelet[2303]: I0909 21:30:18.481143 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:18.481299 kubelet[2303]: I0909 21:30:18.481159 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:18.481299 kubelet[2303]: I0909 21:30:18.481186 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:30:18.481436 kubelet[2303]: I0909 21:30:18.481204 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32fe9aa4aa1612dff4109aa3d8f73c12-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32fe9aa4aa1612dff4109aa3d8f73c12\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:18.481436 kubelet[2303]: I0909 21:30:18.481221 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:18.481436 kubelet[2303]: I0909 21:30:18.481235 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:18.481955 kubelet[2303]: E0909 21:30:18.481887 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Sep 9 21:30:18.654259 kubelet[2303]: I0909 21:30:18.654228 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:30:18.654600 kubelet[2303]: E0909 21:30:18.654581 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Sep 9 21:30:18.731530 kubelet[2303]: E0909 21:30:18.731426 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:18.732141 containerd[1526]: time="2025-09-09T21:30:18.732057854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32fe9aa4aa1612dff4109aa3d8f73c12,Namespace:kube-system,Attempt:0,}" Sep 9 21:30:18.741368 kubelet[2303]: E0909 21:30:18.741335 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:18.741818 containerd[1526]: time="2025-09-09T21:30:18.741778723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 21:30:18.745457 kubelet[2303]: E0909 21:30:18.745147 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:18.746050 containerd[1526]: time="2025-09-09T21:30:18.745761834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 21:30:18.755398 containerd[1526]: time="2025-09-09T21:30:18.755351827Z" level=info msg="connecting to shim 509bf7ae9e09cf00015affc2c6ff7920d2ecf1261127a99181933802b8e1fa58" address="unix:///run/containerd/s/dd3cdd0477f4c8a671be6d44c02b47dfcce182802a8c92cab8e8680546a0c81b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:30:18.770547 containerd[1526]: time="2025-09-09T21:30:18.770502740Z" level=info msg="connecting to shim 612316892bfca2e71ea13d70abe998ea9d4068c0c740a3d61d3bae568a65a7ec" address="unix:///run/containerd/s/f028a4addc9a187809c3036bb63dc373e38479f6a82faff53234277e7a2c4238" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:30:18.782922 containerd[1526]: time="2025-09-09T21:30:18.782878642Z" level=info msg="connecting to shim 6acf2cad6f3042684bb6bacd7fdf902ac796a3eb4f940309d95febdc808c7626" address="unix:///run/containerd/s/2fc44375bf1efc47b26673dc3a27529806ad86318f3a8978307e6121083866c3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:30:18.792484 systemd[1]: Started cri-containerd-509bf7ae9e09cf00015affc2c6ff7920d2ecf1261127a99181933802b8e1fa58.scope - libcontainer container 509bf7ae9e09cf00015affc2c6ff7920d2ecf1261127a99181933802b8e1fa58. 
Sep 9 21:30:18.798005 systemd[1]: Started cri-containerd-612316892bfca2e71ea13d70abe998ea9d4068c0c740a3d61d3bae568a65a7ec.scope - libcontainer container 612316892bfca2e71ea13d70abe998ea9d4068c0c740a3d61d3bae568a65a7ec. Sep 9 21:30:18.814475 systemd[1]: Started cri-containerd-6acf2cad6f3042684bb6bacd7fdf902ac796a3eb4f940309d95febdc808c7626.scope - libcontainer container 6acf2cad6f3042684bb6bacd7fdf902ac796a3eb4f940309d95febdc808c7626. Sep 9 21:30:18.838164 containerd[1526]: time="2025-09-09T21:30:18.838119971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32fe9aa4aa1612dff4109aa3d8f73c12,Namespace:kube-system,Attempt:0,} returns sandbox id \"509bf7ae9e09cf00015affc2c6ff7920d2ecf1261127a99181933802b8e1fa58\"" Sep 9 21:30:18.840188 kubelet[2303]: E0909 21:30:18.840153 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:18.845648 containerd[1526]: time="2025-09-09T21:30:18.845595668Z" level=info msg="CreateContainer within sandbox \"509bf7ae9e09cf00015affc2c6ff7920d2ecf1261127a99181933802b8e1fa58\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 21:30:18.846297 containerd[1526]: time="2025-09-09T21:30:18.846172910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"612316892bfca2e71ea13d70abe998ea9d4068c0c740a3d61d3bae568a65a7ec\"" Sep 9 21:30:18.849108 kubelet[2303]: E0909 21:30:18.849083 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:18.853401 containerd[1526]: time="2025-09-09T21:30:18.853362489Z" level=info msg="CreateContainer within sandbox 
\"612316892bfca2e71ea13d70abe998ea9d4068c0c740a3d61d3bae568a65a7ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 21:30:18.855660 containerd[1526]: time="2025-09-09T21:30:18.855624252Z" level=info msg="Container b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:18.862025 containerd[1526]: time="2025-09-09T21:30:18.861952758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6acf2cad6f3042684bb6bacd7fdf902ac796a3eb4f940309d95febdc808c7626\"" Sep 9 21:30:18.863407 kubelet[2303]: E0909 21:30:18.863381 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:18.865450 containerd[1526]: time="2025-09-09T21:30:18.865394491Z" level=info msg="Container 2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:18.865866 containerd[1526]: time="2025-09-09T21:30:18.865833043Z" level=info msg="CreateContainer within sandbox \"509bf7ae9e09cf00015affc2c6ff7920d2ecf1261127a99181933802b8e1fa58\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8\"" Sep 9 21:30:18.866507 containerd[1526]: time="2025-09-09T21:30:18.866474481Z" level=info msg="StartContainer for \"b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8\"" Sep 9 21:30:18.867011 containerd[1526]: time="2025-09-09T21:30:18.866480532Z" level=info msg="CreateContainer within sandbox \"6acf2cad6f3042684bb6bacd7fdf902ac796a3eb4f940309d95febdc808c7626\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 21:30:18.867884 containerd[1526]: time="2025-09-09T21:30:18.867838944Z" 
level=info msg="connecting to shim b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8" address="unix:///run/containerd/s/dd3cdd0477f4c8a671be6d44c02b47dfcce182802a8c92cab8e8680546a0c81b" protocol=ttrpc version=3 Sep 9 21:30:18.877122 containerd[1526]: time="2025-09-09T21:30:18.877077342Z" level=info msg="CreateContainer within sandbox \"612316892bfca2e71ea13d70abe998ea9d4068c0c740a3d61d3bae568a65a7ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986\"" Sep 9 21:30:18.877617 containerd[1526]: time="2025-09-09T21:30:18.877581212Z" level=info msg="StartContainer for \"2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986\"" Sep 9 21:30:18.878685 containerd[1526]: time="2025-09-09T21:30:18.878647096Z" level=info msg="connecting to shim 2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986" address="unix:///run/containerd/s/f028a4addc9a187809c3036bb63dc373e38479f6a82faff53234277e7a2c4238" protocol=ttrpc version=3 Sep 9 21:30:18.878789 containerd[1526]: time="2025-09-09T21:30:18.878766712Z" level=info msg="Container 987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:18.883684 kubelet[2303]: E0909 21:30:18.883646 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Sep 9 21:30:18.887102 containerd[1526]: time="2025-09-09T21:30:18.887060485Z" level=info msg="CreateContainer within sandbox \"6acf2cad6f3042684bb6bacd7fdf902ac796a3eb4f940309d95febdc808c7626\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852\"" Sep 9 21:30:18.889257 containerd[1526]: 
time="2025-09-09T21:30:18.889217580Z" level=info msg="StartContainer for \"987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852\"" Sep 9 21:30:18.892058 systemd[1]: Started cri-containerd-b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8.scope - libcontainer container b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8. Sep 9 21:30:18.894155 containerd[1526]: time="2025-09-09T21:30:18.894104562Z" level=info msg="connecting to shim 987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852" address="unix:///run/containerd/s/2fc44375bf1efc47b26673dc3a27529806ad86318f3a8978307e6121083866c3" protocol=ttrpc version=3 Sep 9 21:30:18.906481 systemd[1]: Started cri-containerd-2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986.scope - libcontainer container 2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986. Sep 9 21:30:18.915442 systemd[1]: Started cri-containerd-987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852.scope - libcontainer container 987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852. 
Sep 9 21:30:18.954159 containerd[1526]: time="2025-09-09T21:30:18.954110212Z" level=info msg="StartContainer for \"b3c51c140876ba113750f2bf5379f5fc8fa6bf2968b8ca8b82f27120ffe73ec8\" returns successfully" Sep 9 21:30:18.961951 containerd[1526]: time="2025-09-09T21:30:18.961882684Z" level=info msg="StartContainer for \"2018b1b065954ab05b0936295ff8989391c5310197970726259c556630746986\" returns successfully" Sep 9 21:30:18.962227 containerd[1526]: time="2025-09-09T21:30:18.962192844Z" level=info msg="StartContainer for \"987e5cd574458c512438d93a6b5dc11c85fa096d5b2e583c9697312e4c1b3852\" returns successfully" Sep 9 21:30:19.057678 kubelet[2303]: I0909 21:30:19.057546 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:30:19.309051 kubelet[2303]: E0909 21:30:19.308785 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:19.309051 kubelet[2303]: E0909 21:30:19.308914 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:19.311939 kubelet[2303]: E0909 21:30:19.311899 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:19.312080 kubelet[2303]: E0909 21:30:19.312013 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:19.314284 kubelet[2303]: E0909 21:30:19.313855 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:19.314284 kubelet[2303]: E0909 21:30:19.313964 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:20.317988 kubelet[2303]: E0909 21:30:20.317943 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:20.318827 kubelet[2303]: E0909 21:30:20.318073 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:20.318827 kubelet[2303]: E0909 21:30:20.318442 2303 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 21:30:20.318827 kubelet[2303]: E0909 21:30:20.318561 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:20.666942 kubelet[2303]: E0909 21:30:20.666872 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 21:30:20.745302 kubelet[2303]: I0909 21:30:20.744018 2303 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 21:30:20.780433 kubelet[2303]: I0909 21:30:20.780388 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:20.785151 kubelet[2303]: E0909 21:30:20.785114 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:20.785151 kubelet[2303]: I0909 21:30:20.785147 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:20.787009 kubelet[2303]: E0909 
21:30:20.786974 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:20.787009 kubelet[2303]: I0909 21:30:20.787008 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 21:30:20.789001 kubelet[2303]: E0909 21:30:20.788968 2303 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 21:30:21.271514 kubelet[2303]: I0909 21:30:21.271466 2303 apiserver.go:52] "Watching apiserver" Sep 9 21:30:21.279405 kubelet[2303]: I0909 21:30:21.279352 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 21:30:22.001081 kubelet[2303]: I0909 21:30:22.000566 2303 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:22.005600 kubelet[2303]: E0909 21:30:22.005505 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:22.320870 kubelet[2303]: E0909 21:30:22.320752 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:22.910310 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-7.scope)... Sep 9 21:30:22.910325 systemd[1]: Reloading... Sep 9 21:30:22.976314 zram_generator::config[2634]: No configuration found. Sep 9 21:30:23.145850 systemd[1]: Reloading finished in 235 ms. Sep 9 21:30:23.173263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 21:30:23.190405 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 21:30:23.190706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:30:23.190772 systemd[1]: kubelet.service: Consumed 1.184s CPU time, 127.9M memory peak. Sep 9 21:30:23.192670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:30:23.360308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:30:23.371639 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:30:23.422338 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:30:23.422338 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 21:30:23.422338 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 21:30:23.422338 kubelet[2677]: I0909 21:30:23.421834 2677 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:30:23.427686 kubelet[2677]: I0909 21:30:23.427342 2677 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 21:30:23.427686 kubelet[2677]: I0909 21:30:23.427373 2677 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:30:23.427686 kubelet[2677]: I0909 21:30:23.427585 2677 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 21:30:23.428908 kubelet[2677]: I0909 21:30:23.428878 2677 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 21:30:23.432068 kubelet[2677]: I0909 21:30:23.431242 2677 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:30:23.438060 kubelet[2677]: I0909 21:30:23.438014 2677 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:30:23.440855 kubelet[2677]: I0909 21:30:23.440822 2677 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 21:30:23.441097 kubelet[2677]: I0909 21:30:23.441064 2677 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:30:23.441260 kubelet[2677]: I0909 21:30:23.441096 2677 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 21:30:23.441351 kubelet[2677]: I0909 21:30:23.441288 2677 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 21:30:23.441351 
kubelet[2677]: I0909 21:30:23.441298 2677 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 21:30:23.441351 kubelet[2677]: I0909 21:30:23.441344 2677 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:30:23.441508 kubelet[2677]: I0909 21:30:23.441495 2677 kubelet.go:480] "Attempting to sync node with API server" Sep 9 21:30:23.441532 kubelet[2677]: I0909 21:30:23.441510 2677 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:30:23.441552 kubelet[2677]: I0909 21:30:23.441535 2677 kubelet.go:386] "Adding apiserver pod source" Sep 9 21:30:23.441552 kubelet[2677]: I0909 21:30:23.441547 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:30:23.442562 kubelet[2677]: I0909 21:30:23.442469 2677 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:30:23.443099 kubelet[2677]: I0909 21:30:23.443075 2677 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 21:30:23.446374 kubelet[2677]: I0909 21:30:23.446351 2677 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 21:30:23.446465 kubelet[2677]: I0909 21:30:23.446400 2677 server.go:1289] "Started kubelet" Sep 9 21:30:23.446592 kubelet[2677]: I0909 21:30:23.446534 2677 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:30:23.446810 kubelet[2677]: I0909 21:30:23.446650 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:30:23.447160 kubelet[2677]: I0909 21:30:23.447105 2677 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:30:23.449332 kubelet[2677]: I0909 21:30:23.449306 2677 server.go:317] "Adding debug handlers to kubelet server" Sep 9 21:30:23.453336 kubelet[2677]: I0909 
21:30:23.452946 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:30:23.457718 kubelet[2677]: I0909 21:30:23.453638 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:30:23.457718 kubelet[2677]: E0909 21:30:23.457600 2677 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:30:23.458234 kubelet[2677]: I0909 21:30:23.458210 2677 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 21:30:23.459046 kubelet[2677]: I0909 21:30:23.459019 2677 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 21:30:23.459281 kubelet[2677]: I0909 21:30:23.459250 2677 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:30:23.461096 kubelet[2677]: I0909 21:30:23.461033 2677 factory.go:223] Registration of the systemd container factory successfully Sep 9 21:30:23.461395 kubelet[2677]: I0909 21:30:23.461143 2677 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:30:23.463189 kubelet[2677]: I0909 21:30:23.463164 2677 factory.go:223] Registration of the containerd container factory successfully Sep 9 21:30:23.465715 kubelet[2677]: E0909 21:30:23.465677 2677 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:30:23.483293 kubelet[2677]: I0909 21:30:23.483220 2677 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 21:30:23.485552 kubelet[2677]: I0909 21:30:23.485463 2677 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 21:30:23.485552 kubelet[2677]: I0909 21:30:23.485491 2677 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 21:30:23.486254 kubelet[2677]: I0909 21:30:23.486023 2677 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 21:30:23.486254 kubelet[2677]: I0909 21:30:23.486043 2677 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 21:30:23.487005 kubelet[2677]: E0909 21:30:23.486376 2677 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:30:23.506501 kubelet[2677]: I0909 21:30:23.506477 2677 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 21:30:23.506501 kubelet[2677]: I0909 21:30:23.506492 2677 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 21:30:23.506654 kubelet[2677]: I0909 21:30:23.506513 2677 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:30:23.506654 kubelet[2677]: I0909 21:30:23.506643 2677 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 21:30:23.506697 kubelet[2677]: I0909 21:30:23.506653 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 21:30:23.506697 kubelet[2677]: I0909 21:30:23.506669 2677 policy_none.go:49] "None policy: Start" Sep 9 21:30:23.506697 kubelet[2677]: I0909 21:30:23.506679 2677 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 21:30:23.506697 kubelet[2677]: I0909 21:30:23.506688 2677 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:30:23.506794 kubelet[2677]: I0909 21:30:23.506775 2677 state_mem.go:75] "Updated machine memory state" Sep 9 21:30:23.515306 kubelet[2677]: E0909 21:30:23.515208 2677 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 21:30:23.515444 kubelet[2677]: I0909 21:30:23.515424 
2677 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:30:23.515476 kubelet[2677]: I0909 21:30:23.515442 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:30:23.515701 kubelet[2677]: I0909 21:30:23.515682 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:30:23.516860 kubelet[2677]: E0909 21:30:23.516778 2677 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 21:30:23.587376 kubelet[2677]: I0909 21:30:23.587342 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:23.587618 kubelet[2677]: I0909 21:30:23.587342 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:23.587697 kubelet[2677]: I0909 21:30:23.587452 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 21:30:23.594582 kubelet[2677]: E0909 21:30:23.594470 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:23.619692 kubelet[2677]: I0909 21:30:23.619666 2677 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 21:30:23.627596 kubelet[2677]: I0909 21:30:23.627549 2677 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 21:30:23.627732 kubelet[2677]: I0909 21:30:23.627667 2677 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 21:30:23.660957 kubelet[2677]: I0909 21:30:23.660907 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32fe9aa4aa1612dff4109aa3d8f73c12-ca-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"32fe9aa4aa1612dff4109aa3d8f73c12\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:23.660957 kubelet[2677]: I0909 21:30:23.660953 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32fe9aa4aa1612dff4109aa3d8f73c12-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32fe9aa4aa1612dff4109aa3d8f73c12\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:23.661113 kubelet[2677]: I0909 21:30:23.660976 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:23.661113 kubelet[2677]: I0909 21:30:23.660993 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:23.661113 kubelet[2677]: I0909 21:30:23.661008 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32fe9aa4aa1612dff4109aa3d8f73c12-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32fe9aa4aa1612dff4109aa3d8f73c12\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:23.661113 kubelet[2677]: I0909 21:30:23.661022 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:23.661113 kubelet[2677]: I0909 21:30:23.661036 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:23.661224 kubelet[2677]: I0909 21:30:23.661050 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:23.661224 kubelet[2677]: I0909 21:30:23.661066 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:30:23.895800 kubelet[2677]: E0909 21:30:23.895759 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:23.895926 kubelet[2677]: E0909 21:30:23.895850 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:23.895981 kubelet[2677]: E0909 21:30:23.895777 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:24.442952 kubelet[2677]: I0909 21:30:24.442743 2677 apiserver.go:52] "Watching apiserver" Sep 9 21:30:24.459843 kubelet[2677]: I0909 21:30:24.459809 2677 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 21:30:24.496787 kubelet[2677]: I0909 21:30:24.496762 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:24.499323 kubelet[2677]: I0909 21:30:24.497175 2677 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:24.499323 kubelet[2677]: E0909 21:30:24.497649 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:24.504570 kubelet[2677]: E0909 21:30:24.502758 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 21:30:24.504570 kubelet[2677]: E0909 21:30:24.502771 2677 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 21:30:24.504570 kubelet[2677]: E0909 21:30:24.502908 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:24.504570 kubelet[2677]: E0909 21:30:24.502956 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:24.517283 kubelet[2677]: I0909 21:30:24.517092 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.517081737 
podStartE2EDuration="1.517081737s" podCreationTimestamp="2025-09-09 21:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:30:24.51685103 +0000 UTC m=+1.141788154" watchObservedRunningTime="2025-09-09 21:30:24.517081737 +0000 UTC m=+1.142018821" Sep 9 21:30:24.524517 kubelet[2677]: I0909 21:30:24.524473 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.524461042 podStartE2EDuration="1.524461042s" podCreationTimestamp="2025-09-09 21:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:30:24.524330736 +0000 UTC m=+1.149267820" watchObservedRunningTime="2025-09-09 21:30:24.524461042 +0000 UTC m=+1.149398126" Sep 9 21:30:24.539540 kubelet[2677]: I0909 21:30:24.539496 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5394836659999998 podStartE2EDuration="2.539483666s" podCreationTimestamp="2025-09-09 21:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:30:24.532067251 +0000 UTC m=+1.157004295" watchObservedRunningTime="2025-09-09 21:30:24.539483666 +0000 UTC m=+1.164420710" Sep 9 21:30:25.499102 kubelet[2677]: E0909 21:30:25.498726 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:25.499102 kubelet[2677]: E0909 21:30:25.498964 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:25.499102 kubelet[2677]: E0909 
21:30:25.499039 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:26.500239 kubelet[2677]: E0909 21:30:26.500204 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:26.500809 kubelet[2677]: E0909 21:30:26.500779 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:29.344324 kubelet[2677]: I0909 21:30:29.344295 2677 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 21:30:29.344663 containerd[1526]: time="2025-09-09T21:30:29.344572234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 21:30:29.344831 kubelet[2677]: I0909 21:30:29.344790 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 21:30:29.558223 kubelet[2677]: E0909 21:30:29.558186 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:30.072356 kubelet[2677]: E0909 21:30:30.072328 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:30.433116 systemd[1]: Created slice kubepods-besteffort-pod5f4637c7_dbde_4d56_b726_28f8e2e1ce27.slice - libcontainer container kubepods-besteffort-pod5f4637c7_dbde_4d56_b726_28f8e2e1ce27.slice. 
Sep 9 21:30:30.505287 kubelet[2677]: E0909 21:30:30.505187 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:30.506683 kubelet[2677]: I0909 21:30:30.506151 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f4637c7-dbde-4d56-b726-28f8e2e1ce27-lib-modules\") pod \"kube-proxy-kvrbp\" (UID: \"5f4637c7-dbde-4d56-b726-28f8e2e1ce27\") " pod="kube-system/kube-proxy-kvrbp" Sep 9 21:30:30.506683 kubelet[2677]: I0909 21:30:30.506177 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9tp2\" (UniqueName: \"kubernetes.io/projected/5f4637c7-dbde-4d56-b726-28f8e2e1ce27-kube-api-access-q9tp2\") pod \"kube-proxy-kvrbp\" (UID: \"5f4637c7-dbde-4d56-b726-28f8e2e1ce27\") " pod="kube-system/kube-proxy-kvrbp" Sep 9 21:30:30.506683 kubelet[2677]: I0909 21:30:30.506197 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f4637c7-dbde-4d56-b726-28f8e2e1ce27-kube-proxy\") pod \"kube-proxy-kvrbp\" (UID: \"5f4637c7-dbde-4d56-b726-28f8e2e1ce27\") " pod="kube-system/kube-proxy-kvrbp" Sep 9 21:30:30.506683 kubelet[2677]: I0909 21:30:30.506213 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f4637c7-dbde-4d56-b726-28f8e2e1ce27-xtables-lock\") pod \"kube-proxy-kvrbp\" (UID: \"5f4637c7-dbde-4d56-b726-28f8e2e1ce27\") " pod="kube-system/kube-proxy-kvrbp" Sep 9 21:30:30.506683 kubelet[2677]: E0909 21:30:30.506519 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
9 21:30:30.546642 systemd[1]: Created slice kubepods-besteffort-podb5b8f0fd_ba65_4787_ba86_eafa9a1ec584.slice - libcontainer container kubepods-besteffort-podb5b8f0fd_ba65_4787_ba86_eafa9a1ec584.slice. Sep 9 21:30:30.607093 kubelet[2677]: I0909 21:30:30.607063 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxwkl\" (UniqueName: \"kubernetes.io/projected/b5b8f0fd-ba65-4787-ba86-eafa9a1ec584-kube-api-access-jxwkl\") pod \"tigera-operator-755d956888-fmhgp\" (UID: \"b5b8f0fd-ba65-4787-ba86-eafa9a1ec584\") " pod="tigera-operator/tigera-operator-755d956888-fmhgp" Sep 9 21:30:30.607197 kubelet[2677]: I0909 21:30:30.607121 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5b8f0fd-ba65-4787-ba86-eafa9a1ec584-var-lib-calico\") pod \"tigera-operator-755d956888-fmhgp\" (UID: \"b5b8f0fd-ba65-4787-ba86-eafa9a1ec584\") " pod="tigera-operator/tigera-operator-755d956888-fmhgp" Sep 9 21:30:30.744505 kubelet[2677]: E0909 21:30:30.744392 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:30.744995 containerd[1526]: time="2025-09-09T21:30:30.744890021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvrbp,Uid:5f4637c7-dbde-4d56-b726-28f8e2e1ce27,Namespace:kube-system,Attempt:0,}" Sep 9 21:30:30.759407 containerd[1526]: time="2025-09-09T21:30:30.759356770Z" level=info msg="connecting to shim 7eb8f7c0ffa0f4d9307e227400ebc7c096d38ff6e4fc65b621620f148b64ec2a" address="unix:///run/containerd/s/0cd1889aa69babc125a4e4b8140eb2cced69fc7027662ea0ef4f27d58bc2acc3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:30:30.784447 systemd[1]: Started cri-containerd-7eb8f7c0ffa0f4d9307e227400ebc7c096d38ff6e4fc65b621620f148b64ec2a.scope - libcontainer container 
7eb8f7c0ffa0f4d9307e227400ebc7c096d38ff6e4fc65b621620f148b64ec2a. Sep 9 21:30:30.805788 containerd[1526]: time="2025-09-09T21:30:30.805753305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvrbp,Uid:5f4637c7-dbde-4d56-b726-28f8e2e1ce27,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eb8f7c0ffa0f4d9307e227400ebc7c096d38ff6e4fc65b621620f148b64ec2a\"" Sep 9 21:30:30.806464 kubelet[2677]: E0909 21:30:30.806443 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:30.810340 containerd[1526]: time="2025-09-09T21:30:30.810301655Z" level=info msg="CreateContainer within sandbox \"7eb8f7c0ffa0f4d9307e227400ebc7c096d38ff6e4fc65b621620f148b64ec2a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 21:30:30.819833 containerd[1526]: time="2025-09-09T21:30:30.819166322Z" level=info msg="Container bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:30.826409 containerd[1526]: time="2025-09-09T21:30:30.826372818Z" level=info msg="CreateContainer within sandbox \"7eb8f7c0ffa0f4d9307e227400ebc7c096d38ff6e4fc65b621620f148b64ec2a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453\"" Sep 9 21:30:30.827001 containerd[1526]: time="2025-09-09T21:30:30.826975655Z" level=info msg="StartContainer for \"bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453\"" Sep 9 21:30:30.828396 containerd[1526]: time="2025-09-09T21:30:30.828365048Z" level=info msg="connecting to shim bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453" address="unix:///run/containerd/s/0cd1889aa69babc125a4e4b8140eb2cced69fc7027662ea0ef4f27d58bc2acc3" protocol=ttrpc version=3 Sep 9 21:30:30.845485 systemd[1]: Started 
cri-containerd-bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453.scope - libcontainer container bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453. Sep 9 21:30:30.849524 containerd[1526]: time="2025-09-09T21:30:30.849477365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-fmhgp,Uid:b5b8f0fd-ba65-4787-ba86-eafa9a1ec584,Namespace:tigera-operator,Attempt:0,}" Sep 9 21:30:30.866441 containerd[1526]: time="2025-09-09T21:30:30.866403970Z" level=info msg="connecting to shim de369250974a6bff4ee0281e4bd4296cbb9d6fcbf5e120ad9ab45a75303dbc39" address="unix:///run/containerd/s/cfdbcfd67ef75009ef0626294a0f90fc487ba4bb813b83e54128ec19cfb11ec6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:30:30.891093 containerd[1526]: time="2025-09-09T21:30:30.891063058Z" level=info msg="StartContainer for \"bcfcf076fc45119edd54cd0e1ef73fa8de32d1ce5d99866bbfeb75cbc092f453\" returns successfully" Sep 9 21:30:30.894465 systemd[1]: Started cri-containerd-de369250974a6bff4ee0281e4bd4296cbb9d6fcbf5e120ad9ab45a75303dbc39.scope - libcontainer container de369250974a6bff4ee0281e4bd4296cbb9d6fcbf5e120ad9ab45a75303dbc39. 
Sep 9 21:30:30.927284 containerd[1526]: time="2025-09-09T21:30:30.927206255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-fmhgp,Uid:b5b8f0fd-ba65-4787-ba86-eafa9a1ec584,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"de369250974a6bff4ee0281e4bd4296cbb9d6fcbf5e120ad9ab45a75303dbc39\"" Sep 9 21:30:30.929348 containerd[1526]: time="2025-09-09T21:30:30.929317122Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 21:30:31.508755 kubelet[2677]: E0909 21:30:31.508725 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:31.509051 kubelet[2677]: E0909 21:30:31.508844 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:31.509238 kubelet[2677]: E0909 21:30:31.509171 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:31.519838 kubelet[2677]: I0909 21:30:31.519745 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kvrbp" podStartSLOduration=1.519731519 podStartE2EDuration="1.519731519s" podCreationTimestamp="2025-09-09 21:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:30:31.519437176 +0000 UTC m=+8.144374220" watchObservedRunningTime="2025-09-09 21:30:31.519731519 +0000 UTC m=+8.144668603" Sep 9 21:30:32.413695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282185684.mount: Deactivated successfully. 
Sep 9 21:30:33.133571 containerd[1526]: time="2025-09-09T21:30:33.133525185Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:33.134110 containerd[1526]: time="2025-09-09T21:30:33.134078693Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 21:30:33.134949 containerd[1526]: time="2025-09-09T21:30:33.134924563Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:33.137090 containerd[1526]: time="2025-09-09T21:30:33.137044222Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:33.138144 containerd[1526]: time="2025-09-09T21:30:33.137769785Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.208418281s" Sep 9 21:30:33.138144 containerd[1526]: time="2025-09-09T21:30:33.137798041Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 21:30:33.144708 containerd[1526]: time="2025-09-09T21:30:33.144680428Z" level=info msg="CreateContainer within sandbox \"de369250974a6bff4ee0281e4bd4296cbb9d6fcbf5e120ad9ab45a75303dbc39\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 21:30:33.150332 containerd[1526]: time="2025-09-09T21:30:33.150301394Z" level=info msg="Container 
c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:33.156026 containerd[1526]: time="2025-09-09T21:30:33.155984314Z" level=info msg="CreateContainer within sandbox \"de369250974a6bff4ee0281e4bd4296cbb9d6fcbf5e120ad9ab45a75303dbc39\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a\"" Sep 9 21:30:33.156955 containerd[1526]: time="2025-09-09T21:30:33.156363805Z" level=info msg="StartContainer for \"c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a\"" Sep 9 21:30:33.157201 containerd[1526]: time="2025-09-09T21:30:33.157175376Z" level=info msg="connecting to shim c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a" address="unix:///run/containerd/s/cfdbcfd67ef75009ef0626294a0f90fc487ba4bb813b83e54128ec19cfb11ec6" protocol=ttrpc version=3 Sep 9 21:30:33.182449 systemd[1]: Started cri-containerd-c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a.scope - libcontainer container c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a. 
Sep 9 21:30:33.207803 containerd[1526]: time="2025-09-09T21:30:33.207757222Z" level=info msg="StartContainer for \"c2f470e08858c3c8695b2439cbd0d8fe68f082426bf5dde6d57b8e4311ca7e1a\" returns successfully" Sep 9 21:30:36.192367 kubelet[2677]: E0909 21:30:36.192099 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:36.199380 kubelet[2677]: I0909 21:30:36.199328 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-fmhgp" podStartSLOduration=3.9876200429999997 podStartE2EDuration="6.199304723s" podCreationTimestamp="2025-09-09 21:30:30 +0000 UTC" firstStartedPulling="2025-09-09 21:30:30.928730176 +0000 UTC m=+7.553667220" lastFinishedPulling="2025-09-09 21:30:33.140414816 +0000 UTC m=+9.765351900" observedRunningTime="2025-09-09 21:30:33.522529491 +0000 UTC m=+10.147466615" watchObservedRunningTime="2025-09-09 21:30:36.199304723 +0000 UTC m=+12.824241807" Sep 9 21:30:38.313544 sudo[1745]: pam_unix(sudo:session): session closed for user root Sep 9 21:30:38.316755 sshd[1744]: Connection closed by 10.0.0.1 port 36664 Sep 9 21:30:38.317518 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Sep 9 21:30:38.322509 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:36664.service: Deactivated successfully. Sep 9 21:30:38.324583 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 21:30:38.324910 systemd[1]: session-7.scope: Consumed 6.019s CPU time, 217.4M memory peak. Sep 9 21:30:38.325913 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. Sep 9 21:30:38.327086 systemd-logind[1504]: Removed session 7. Sep 9 21:30:40.576142 update_engine[1505]: I20250909 21:30:40.575331 1505 update_attempter.cc:509] Updating boot flags... 
Sep 9 21:30:43.790844 systemd[1]: Created slice kubepods-besteffort-pod943a571f_9d15_4b8a_ba6f_3a1b95137231.slice - libcontainer container kubepods-besteffort-pod943a571f_9d15_4b8a_ba6f_3a1b95137231.slice. Sep 9 21:30:43.793361 kubelet[2677]: I0909 21:30:43.793316 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943a571f-9d15-4b8a-ba6f-3a1b95137231-tigera-ca-bundle\") pod \"calico-typha-5995fd5b7-hncb8\" (UID: \"943a571f-9d15-4b8a-ba6f-3a1b95137231\") " pod="calico-system/calico-typha-5995fd5b7-hncb8" Sep 9 21:30:43.793601 kubelet[2677]: I0909 21:30:43.793406 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/943a571f-9d15-4b8a-ba6f-3a1b95137231-typha-certs\") pod \"calico-typha-5995fd5b7-hncb8\" (UID: \"943a571f-9d15-4b8a-ba6f-3a1b95137231\") " pod="calico-system/calico-typha-5995fd5b7-hncb8" Sep 9 21:30:43.793601 kubelet[2677]: I0909 21:30:43.793434 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spj4t\" (UniqueName: \"kubernetes.io/projected/943a571f-9d15-4b8a-ba6f-3a1b95137231-kube-api-access-spj4t\") pod \"calico-typha-5995fd5b7-hncb8\" (UID: \"943a571f-9d15-4b8a-ba6f-3a1b95137231\") " pod="calico-system/calico-typha-5995fd5b7-hncb8" Sep 9 21:30:44.064234 systemd[1]: Created slice kubepods-besteffort-pod3e8a46d2_9c54_42e2_b11c_52eac85e4019.slice - libcontainer container kubepods-besteffort-pod3e8a46d2_9c54_42e2_b11c_52eac85e4019.slice. 
Sep 9 21:30:44.095190 kubelet[2677]: I0909 21:30:44.095147 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e8a46d2-9c54-42e2-b11c-52eac85e4019-node-certs\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095190 kubelet[2677]: I0909 21:30:44.095190 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-cni-bin-dir\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095356 kubelet[2677]: I0909 21:30:44.095217 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-cni-net-dir\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095356 kubelet[2677]: I0909 21:30:44.095233 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e8a46d2-9c54-42e2-b11c-52eac85e4019-tigera-ca-bundle\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095356 kubelet[2677]: I0909 21:30:44.095250 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-var-run-calico\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095356 kubelet[2677]: I0909 21:30:44.095264 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk6l4\" (UniqueName: \"kubernetes.io/projected/3e8a46d2-9c54-42e2-b11c-52eac85e4019-kube-api-access-zk6l4\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095356 kubelet[2677]: I0909 21:30:44.095306 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-var-lib-calico\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095472 kubelet[2677]: I0909 21:30:44.095322 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-lib-modules\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095472 kubelet[2677]: I0909 21:30:44.095334 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-xtables-lock\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095472 kubelet[2677]: I0909 21:30:44.095350 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-cni-log-dir\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095472 kubelet[2677]: I0909 21:30:44.095377 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-flexvol-driver-host\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.095472 kubelet[2677]: I0909 21:30:44.095399 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e8a46d2-9c54-42e2-b11c-52eac85e4019-policysync\") pod \"calico-node-h72rw\" (UID: \"3e8a46d2-9c54-42e2-b11c-52eac85e4019\") " pod="calico-system/calico-node-h72rw"
Sep 9 21:30:44.102723 kubelet[2677]: E0909 21:30:44.102381 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:30:44.102856 containerd[1526]: time="2025-09-09T21:30:44.102815675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5995fd5b7-hncb8,Uid:943a571f-9d15-4b8a-ba6f-3a1b95137231,Namespace:calico-system,Attempt:0,}"
Sep 9 21:30:44.138917 containerd[1526]: time="2025-09-09T21:30:44.138810416Z" level=info msg="connecting to shim a6555027151b0f416a3bbb72299cc85aecf8b671c6df3716fd5894557519052c" address="unix:///run/containerd/s/c44adab94412cd995f8a5025bd5b68c8a7039f8b13b7e5ccd276fd2ecd5f132d" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:30:44.187461 systemd[1]: Started cri-containerd-a6555027151b0f416a3bbb72299cc85aecf8b671c6df3716fd5894557519052c.scope - libcontainer container a6555027151b0f416a3bbb72299cc85aecf8b671c6df3716fd5894557519052c.
Sep 9 21:30:44.203671 kubelet[2677]: E0909 21:30:44.203526 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.203671 kubelet[2677]: W0909 21:30:44.203548 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.206656 kubelet[2677]: E0909 21:30:44.206615 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.211762 kubelet[2677]: E0909 21:30:44.211745 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.211762 kubelet[2677]: W0909 21:30:44.211760 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.211864 kubelet[2677]: E0909 21:30:44.211773 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.266400 containerd[1526]: time="2025-09-09T21:30:44.266224922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5995fd5b7-hncb8,Uid:943a571f-9d15-4b8a-ba6f-3a1b95137231,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6555027151b0f416a3bbb72299cc85aecf8b671c6df3716fd5894557519052c\""
Sep 9 21:30:44.269672 kubelet[2677]: E0909 21:30:44.269631 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:30:44.271013 containerd[1526]: time="2025-09-09T21:30:44.270981301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 21:30:44.285578 kubelet[2677]: E0909 21:30:44.285249 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j2xp" podUID="0c443cb4-c154-434b-8539-5fef8fd1056c"
Sep 9 21:30:44.288587 kubelet[2677]: E0909 21:30:44.288397 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.288587 kubelet[2677]: W0909 21:30:44.288421 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.288587 kubelet[2677]: E0909 21:30:44.288440 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.289178 kubelet[2677]: E0909 21:30:44.289138 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.289178 kubelet[2677]: W0909 21:30:44.289153 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.289178 kubelet[2677]: E0909 21:30:44.289197 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.290743 kubelet[2677]: E0909 21:30:44.290033 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.290743 kubelet[2677]: W0909 21:30:44.290298 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.290743 kubelet[2677]: E0909 21:30:44.290317 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.291382 kubelet[2677]: E0909 21:30:44.291357 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.291382 kubelet[2677]: W0909 21:30:44.291374 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.291507 kubelet[2677]: E0909 21:30:44.291388 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.292253 kubelet[2677]: E0909 21:30:44.292231 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.292253 kubelet[2677]: W0909 21:30:44.292247 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.292436 kubelet[2677]: E0909 21:30:44.292260 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.292436 kubelet[2677]: E0909 21:30:44.292435 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.292436 kubelet[2677]: W0909 21:30:44.292443 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.292436 kubelet[2677]: E0909 21:30:44.292462 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.293602 kubelet[2677]: E0909 21:30:44.293373 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.293602 kubelet[2677]: W0909 21:30:44.293390 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.293602 kubelet[2677]: E0909 21:30:44.293404 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.293929 kubelet[2677]: E0909 21:30:44.293886 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.293929 kubelet[2677]: W0909 21:30:44.293901 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.293929 kubelet[2677]: E0909 21:30:44.293914 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.294578 kubelet[2677]: E0909 21:30:44.294557 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.294578 kubelet[2677]: W0909 21:30:44.294575 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.294644 kubelet[2677]: E0909 21:30:44.294587 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.295828 kubelet[2677]: E0909 21:30:44.295770 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.295828 kubelet[2677]: W0909 21:30:44.295793 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.295828 kubelet[2677]: E0909 21:30:44.295807 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.297331 kubelet[2677]: E0909 21:30:44.297255 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.297331 kubelet[2677]: W0909 21:30:44.297291 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.297331 kubelet[2677]: E0909 21:30:44.297310 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.297764 kubelet[2677]: E0909 21:30:44.297745 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.297764 kubelet[2677]: W0909 21:30:44.297760 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.297846 kubelet[2677]: E0909 21:30:44.297774 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.298002 kubelet[2677]: E0909 21:30:44.297979 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.298002 kubelet[2677]: W0909 21:30:44.297993 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.298002 kubelet[2677]: E0909 21:30:44.298003 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.298645 kubelet[2677]: E0909 21:30:44.298619 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.298645 kubelet[2677]: W0909 21:30:44.298639 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.298732 kubelet[2677]: E0909 21:30:44.298652 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.299232 kubelet[2677]: E0909 21:30:44.299141 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.299232 kubelet[2677]: W0909 21:30:44.299161 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.299232 kubelet[2677]: E0909 21:30:44.299173 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.299690 kubelet[2677]: E0909 21:30:44.299592 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.299690 kubelet[2677]: W0909 21:30:44.299606 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.299690 kubelet[2677]: E0909 21:30:44.299618 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.300207 kubelet[2677]: E0909 21:30:44.300088 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.300207 kubelet[2677]: W0909 21:30:44.300105 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.300207 kubelet[2677]: E0909 21:30:44.300118 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.300465 kubelet[2677]: E0909 21:30:44.300365 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.300465 kubelet[2677]: W0909 21:30:44.300376 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.300465 kubelet[2677]: E0909 21:30:44.300385 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.300598 kubelet[2677]: E0909 21:30:44.300563 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.300598 kubelet[2677]: W0909 21:30:44.300577 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.300598 kubelet[2677]: E0909 21:30:44.300586 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.301063 kubelet[2677]: E0909 21:30:44.300756 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.301063 kubelet[2677]: W0909 21:30:44.300764 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.301063 kubelet[2677]: E0909 21:30:44.300772 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.301526 kubelet[2677]: E0909 21:30:44.301465 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.301672 kubelet[2677]: W0909 21:30:44.301654 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.301862 kubelet[2677]: E0909 21:30:44.301812 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.302211 kubelet[2677]: I0909 21:30:44.302055 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c443cb4-c154-434b-8539-5fef8fd1056c-kubelet-dir\") pod \"csi-node-driver-6j2xp\" (UID: \"0c443cb4-c154-434b-8539-5fef8fd1056c\") " pod="calico-system/csi-node-driver-6j2xp"
Sep 9 21:30:44.302315 kubelet[2677]: E0909 21:30:44.302286 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.302315 kubelet[2677]: W0909 21:30:44.302304 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.302424 kubelet[2677]: E0909 21:30:44.302318 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.303411 kubelet[2677]: E0909 21:30:44.303377 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.303411 kubelet[2677]: W0909 21:30:44.303396 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.303411 kubelet[2677]: E0909 21:30:44.303409 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.303784 kubelet[2677]: E0909 21:30:44.303617 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.303784 kubelet[2677]: W0909 21:30:44.303627 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.303784 kubelet[2677]: E0909 21:30:44.303637 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.303784 kubelet[2677]: I0909 21:30:44.303661 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c443cb4-c154-434b-8539-5fef8fd1056c-varrun\") pod \"csi-node-driver-6j2xp\" (UID: \"0c443cb4-c154-434b-8539-5fef8fd1056c\") " pod="calico-system/csi-node-driver-6j2xp"
Sep 9 21:30:44.303784 kubelet[2677]: E0909 21:30:44.303828 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.303784 kubelet[2677]: W0909 21:30:44.303839 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.304004 kubelet[2677]: E0909 21:30:44.303848 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.304004 kubelet[2677]: I0909 21:30:44.303869 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jdl\" (UniqueName: \"kubernetes.io/projected/0c443cb4-c154-434b-8539-5fef8fd1056c-kube-api-access-g9jdl\") pod \"csi-node-driver-6j2xp\" (UID: \"0c443cb4-c154-434b-8539-5fef8fd1056c\") " pod="calico-system/csi-node-driver-6j2xp"
Sep 9 21:30:44.304052 kubelet[2677]: E0909 21:30:44.304003 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.304052 kubelet[2677]: W0909 21:30:44.304014 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.304052 kubelet[2677]: E0909 21:30:44.304034 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.304392 kubelet[2677]: E0909 21:30:44.304224 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.304392 kubelet[2677]: W0909 21:30:44.304235 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.304392 kubelet[2677]: E0909 21:30:44.304243 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.304898 kubelet[2677]: E0909 21:30:44.304485 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.304898 kubelet[2677]: W0909 21:30:44.304495 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.304898 kubelet[2677]: E0909 21:30:44.304506 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.304898 kubelet[2677]: I0909 21:30:44.304544 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c443cb4-c154-434b-8539-5fef8fd1056c-registration-dir\") pod \"csi-node-driver-6j2xp\" (UID: \"0c443cb4-c154-434b-8539-5fef8fd1056c\") " pod="calico-system/csi-node-driver-6j2xp"
Sep 9 21:30:44.304898 kubelet[2677]: E0909 21:30:44.304780 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.304898 kubelet[2677]: W0909 21:30:44.304794 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.304898 kubelet[2677]: E0909 21:30:44.304806 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.304898 kubelet[2677]: E0909 21:30:44.304974 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.304898 kubelet[2677]: W0909 21:30:44.304984 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.305367 kubelet[2677]: E0909 21:30:44.304992 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.305474 kubelet[2677]: E0909 21:30:44.305445 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.305474 kubelet[2677]: W0909 21:30:44.305470 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.306223 kubelet[2677]: E0909 21:30:44.305483 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.306223 kubelet[2677]: E0909 21:30:44.305648 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.306223 kubelet[2677]: W0909 21:30:44.305655 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.306223 kubelet[2677]: E0909 21:30:44.305663 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.306223 kubelet[2677]: E0909 21:30:44.305809 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.306223 kubelet[2677]: W0909 21:30:44.305815 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.306223 kubelet[2677]: E0909 21:30:44.305823 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.306223 kubelet[2677]: I0909 21:30:44.305847 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c443cb4-c154-434b-8539-5fef8fd1056c-socket-dir\") pod \"csi-node-driver-6j2xp\" (UID: \"0c443cb4-c154-434b-8539-5fef8fd1056c\") " pod="calico-system/csi-node-driver-6j2xp"
Sep 9 21:30:44.306223 kubelet[2677]: E0909 21:30:44.306038 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.306488 kubelet[2677]: W0909 21:30:44.306046 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.306488 kubelet[2677]: E0909 21:30:44.306054 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.306488 kubelet[2677]: E0909 21:30:44.306212 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.306488 kubelet[2677]: W0909 21:30:44.306220 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.306488 kubelet[2677]: E0909 21:30:44.306228 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.372780 containerd[1526]: time="2025-09-09T21:30:44.371506854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h72rw,Uid:3e8a46d2-9c54-42e2-b11c-52eac85e4019,Namespace:calico-system,Attempt:0,}"
Sep 9 21:30:44.399807 containerd[1526]: time="2025-09-09T21:30:44.399764918Z" level=info msg="connecting to shim e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96" address="unix:///run/containerd/s/04ed0b479e81060786c1bbafd74a4658b77863883179fa86107945e3695e479f" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:30:44.407606 kubelet[2677]: E0909 21:30:44.407573 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.407832 kubelet[2677]: W0909 21:30:44.407597 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.407832 kubelet[2677]: E0909 21:30:44.407636 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.408090 kubelet[2677]: E0909 21:30:44.408073 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.408125 kubelet[2677]: W0909 21:30:44.408108 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.408359 kubelet[2677]: E0909 21:30:44.408123 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.408550 kubelet[2677]: E0909 21:30:44.408528 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.408637 kubelet[2677]: W0909 21:30:44.408619 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.408695 kubelet[2677]: E0909 21:30:44.408683 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 21:30:44.408969 kubelet[2677]: E0909 21:30:44.408956 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 21:30:44.409121 kubelet[2677]: W0909 21:30:44.409019 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 21:30:44.409121 kubelet[2677]: E0909 21:30:44.409032 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.409326 kubelet[2677]: E0909 21:30:44.409311 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.409426 kubelet[2677]: W0909 21:30:44.409371 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.409426 kubelet[2677]: E0909 21:30:44.409385 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.409637 kubelet[2677]: E0909 21:30:44.409621 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.409673 kubelet[2677]: W0909 21:30:44.409637 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.409673 kubelet[2677]: E0909 21:30:44.409650 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.409964 kubelet[2677]: E0909 21:30:44.409950 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.410011 kubelet[2677]: W0909 21:30:44.409963 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.410011 kubelet[2677]: E0909 21:30:44.409987 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.410223 kubelet[2677]: E0909 21:30:44.410204 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.410223 kubelet[2677]: W0909 21:30:44.410220 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.410355 kubelet[2677]: E0909 21:30:44.410230 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.410576 kubelet[2677]: E0909 21:30:44.410540 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.410646 kubelet[2677]: W0909 21:30:44.410587 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.410646 kubelet[2677]: E0909 21:30:44.410600 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.410854 kubelet[2677]: E0909 21:30:44.410841 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.410854 kubelet[2677]: W0909 21:30:44.410853 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.410950 kubelet[2677]: E0909 21:30:44.410863 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.411089 kubelet[2677]: E0909 21:30:44.411076 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.411129 kubelet[2677]: W0909 21:30:44.411089 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.411129 kubelet[2677]: E0909 21:30:44.411099 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.411374 kubelet[2677]: E0909 21:30:44.411361 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.411411 kubelet[2677]: W0909 21:30:44.411399 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.411466 kubelet[2677]: E0909 21:30:44.411409 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.411626 kubelet[2677]: E0909 21:30:44.411612 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.411659 kubelet[2677]: W0909 21:30:44.411628 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.411659 kubelet[2677]: E0909 21:30:44.411636 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.411809 kubelet[2677]: E0909 21:30:44.411797 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.411809 kubelet[2677]: W0909 21:30:44.411807 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.411901 kubelet[2677]: E0909 21:30:44.411815 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.412013 kubelet[2677]: E0909 21:30:44.411969 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.412013 kubelet[2677]: W0909 21:30:44.411979 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.412013 kubelet[2677]: E0909 21:30:44.411987 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.412218 kubelet[2677]: E0909 21:30:44.412198 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.412251 kubelet[2677]: W0909 21:30:44.412219 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.412251 kubelet[2677]: E0909 21:30:44.412228 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.412632 kubelet[2677]: E0909 21:30:44.412603 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.412632 kubelet[2677]: W0909 21:30:44.412618 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.412632 kubelet[2677]: E0909 21:30:44.412629 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.412934 kubelet[2677]: E0909 21:30:44.412913 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.412934 kubelet[2677]: W0909 21:30:44.412926 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.413057 kubelet[2677]: E0909 21:30:44.412937 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.413645 kubelet[2677]: E0909 21:30:44.413129 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.413645 kubelet[2677]: W0909 21:30:44.413142 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.413645 kubelet[2677]: E0909 21:30:44.413152 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.413645 kubelet[2677]: E0909 21:30:44.413561 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.413645 kubelet[2677]: W0909 21:30:44.413572 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.413645 kubelet[2677]: E0909 21:30:44.413583 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.414214 kubelet[2677]: E0909 21:30:44.414177 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.414214 kubelet[2677]: W0909 21:30:44.414193 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.414214 kubelet[2677]: E0909 21:30:44.414205 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.414214 kubelet[2677]: E0909 21:30:44.414609 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.414214 kubelet[2677]: W0909 21:30:44.414618 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.414629 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.414994 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.415247 kubelet[2677]: W0909 21:30:44.415003 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.415014 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.415224 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.415247 kubelet[2677]: W0909 21:30:44.415233 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.415241 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.415458 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.415247 kubelet[2677]: W0909 21:30:44.415467 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.415247 kubelet[2677]: E0909 21:30:44.415476 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:44.422461 systemd[1]: Started cri-containerd-e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96.scope - libcontainer container e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96. Sep 9 21:30:44.426794 kubelet[2677]: E0909 21:30:44.426774 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:44.426794 kubelet[2677]: W0909 21:30:44.426793 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:44.426865 kubelet[2677]: E0909 21:30:44.426807 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:44.468329 containerd[1526]: time="2025-09-09T21:30:44.468266301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h72rw,Uid:3e8a46d2-9c54-42e2-b11c-52eac85e4019,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\"" Sep 9 21:30:45.278815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538835381.mount: Deactivated successfully. Sep 9 21:30:45.486539 kubelet[2677]: E0909 21:30:45.486502 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j2xp" podUID="0c443cb4-c154-434b-8539-5fef8fd1056c" Sep 9 21:30:45.899050 containerd[1526]: time="2025-09-09T21:30:45.899005343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:45.900279 containerd[1526]: time="2025-09-09T21:30:45.900220308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 9 21:30:45.900999 containerd[1526]: time="2025-09-09T21:30:45.900971493Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:45.902775 containerd[1526]: time="2025-09-09T21:30:45.902743866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:45.903588 containerd[1526]: time="2025-09-09T21:30:45.903558751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.63254316s" Sep 9 21:30:45.903637 containerd[1526]: time="2025-09-09T21:30:45.903589920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 21:30:45.904403 containerd[1526]: time="2025-09-09T21:30:45.904304295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 21:30:45.914945 containerd[1526]: time="2025-09-09T21:30:45.914910682Z" level=info msg="CreateContainer within sandbox \"a6555027151b0f416a3bbb72299cc85aecf8b671c6df3716fd5894557519052c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 21:30:45.922299 containerd[1526]: time="2025-09-09T21:30:45.922048307Z" level=info msg="Container 1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:45.936171 containerd[1526]: time="2025-09-09T21:30:45.936113974Z" level=info msg="CreateContainer within sandbox \"a6555027151b0f416a3bbb72299cc85aecf8b671c6df3716fd5894557519052c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4\"" Sep 9 21:30:45.937459 containerd[1526]: time="2025-09-09T21:30:45.937430930Z" level=info msg="StartContainer for \"1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4\"" Sep 9 21:30:45.950723 containerd[1526]: time="2025-09-09T21:30:45.939202982Z" level=info msg="connecting to shim 1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4" address="unix:///run/containerd/s/c44adab94412cd995f8a5025bd5b68c8a7039f8b13b7e5ccd276fd2ecd5f132d" protocol=ttrpc version=3 Sep 9 
21:30:45.982437 systemd[1]: Started cri-containerd-1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4.scope - libcontainer container 1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4. Sep 9 21:30:46.015020 containerd[1526]: time="2025-09-09T21:30:46.014984728Z" level=info msg="StartContainer for \"1d5a574101987eecf6048ed9d486a3157e851d9b1f5168d3b612d60dc4efa3e4\" returns successfully" Sep 9 21:30:46.573282 kubelet[2677]: E0909 21:30:46.572973 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:46.592736 kubelet[2677]: I0909 21:30:46.592663 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5995fd5b7-hncb8" podStartSLOduration=1.959271456 podStartE2EDuration="3.59264667s" podCreationTimestamp="2025-09-09 21:30:43 +0000 UTC" firstStartedPulling="2025-09-09 21:30:44.270739584 +0000 UTC m=+20.895676668" lastFinishedPulling="2025-09-09 21:30:45.904114798 +0000 UTC m=+22.529051882" observedRunningTime="2025-09-09 21:30:46.585303563 +0000 UTC m=+23.210240647" watchObservedRunningTime="2025-09-09 21:30:46.59264667 +0000 UTC m=+23.217583754" Sep 9 21:30:46.616310 kubelet[2677]: E0909 21:30:46.616283 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.616310 kubelet[2677]: W0909 21:30:46.616306 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.618463 kubelet[2677]: E0909 21:30:46.618426 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.618645 kubelet[2677]: E0909 21:30:46.618629 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.618683 kubelet[2677]: W0909 21:30:46.618641 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.618715 kubelet[2677]: E0909 21:30:46.618686 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.618836 kubelet[2677]: E0909 21:30:46.618820 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.618836 kubelet[2677]: W0909 21:30:46.618830 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.618886 kubelet[2677]: E0909 21:30:46.618838 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.619027 kubelet[2677]: E0909 21:30:46.619015 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.619027 kubelet[2677]: W0909 21:30:46.619026 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.619093 kubelet[2677]: E0909 21:30:46.619034 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.619309 kubelet[2677]: E0909 21:30:46.619296 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.619309 kubelet[2677]: W0909 21:30:46.619308 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.619365 kubelet[2677]: E0909 21:30:46.619319 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.619471 kubelet[2677]: E0909 21:30:46.619459 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.619471 kubelet[2677]: W0909 21:30:46.619469 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.619520 kubelet[2677]: E0909 21:30:46.619477 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.619602 kubelet[2677]: E0909 21:30:46.619592 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.619624 kubelet[2677]: W0909 21:30:46.619601 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.619624 kubelet[2677]: E0909 21:30:46.619611 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.619776 kubelet[2677]: E0909 21:30:46.619757 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.619776 kubelet[2677]: W0909 21:30:46.619767 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.619776 kubelet[2677]: E0909 21:30:46.619775 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.619970 kubelet[2677]: E0909 21:30:46.619954 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.619970 kubelet[2677]: W0909 21:30:46.619967 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620067 kubelet[2677]: E0909 21:30:46.619976 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.620125 kubelet[2677]: E0909 21:30:46.620103 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.620125 kubelet[2677]: W0909 21:30:46.620113 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620125 kubelet[2677]: E0909 21:30:46.620120 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.620287 kubelet[2677]: E0909 21:30:46.620261 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.620320 kubelet[2677]: W0909 21:30:46.620290 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620320 kubelet[2677]: E0909 21:30:46.620299 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.620430 kubelet[2677]: E0909 21:30:46.620418 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.620430 kubelet[2677]: W0909 21:30:46.620427 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620475 kubelet[2677]: E0909 21:30:46.620435 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.620608 kubelet[2677]: E0909 21:30:46.620598 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.620608 kubelet[2677]: W0909 21:30:46.620607 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620652 kubelet[2677]: E0909 21:30:46.620614 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.620742 kubelet[2677]: E0909 21:30:46.620732 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.620766 kubelet[2677]: W0909 21:30:46.620742 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620766 kubelet[2677]: E0909 21:30:46.620749 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.620875 kubelet[2677]: E0909 21:30:46.620865 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.620900 kubelet[2677]: W0909 21:30:46.620876 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.620900 kubelet[2677]: E0909 21:30:46.620885 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.628257 kubelet[2677]: E0909 21:30:46.628217 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.628257 kubelet[2677]: W0909 21:30:46.628236 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.628257 kubelet[2677]: E0909 21:30:46.628248 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.628439 kubelet[2677]: E0909 21:30:46.628425 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.628439 kubelet[2677]: W0909 21:30:46.628436 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.628481 kubelet[2677]: E0909 21:30:46.628445 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.628709 kubelet[2677]: E0909 21:30:46.628680 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.628709 kubelet[2677]: W0909 21:30:46.628697 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.628764 kubelet[2677]: E0909 21:30:46.628709 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.628928 kubelet[2677]: E0909 21:30:46.628916 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.628928 kubelet[2677]: W0909 21:30:46.628927 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.628972 kubelet[2677]: E0909 21:30:46.628936 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.629086 kubelet[2677]: E0909 21:30:46.629075 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.629107 kubelet[2677]: W0909 21:30:46.629086 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.629107 kubelet[2677]: E0909 21:30:46.629095 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.629318 kubelet[2677]: E0909 21:30:46.629293 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.629318 kubelet[2677]: W0909 21:30:46.629305 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.629318 kubelet[2677]: E0909 21:30:46.629313 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.629540 kubelet[2677]: E0909 21:30:46.629524 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.629540 kubelet[2677]: W0909 21:30:46.629537 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.629598 kubelet[2677]: E0909 21:30:46.629547 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.629749 kubelet[2677]: E0909 21:30:46.629737 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.629749 kubelet[2677]: W0909 21:30:46.629749 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.629805 kubelet[2677]: E0909 21:30:46.629757 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.629913 kubelet[2677]: E0909 21:30:46.629895 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.629913 kubelet[2677]: W0909 21:30:46.629905 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.629913 kubelet[2677]: E0909 21:30:46.629912 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.630049 kubelet[2677]: E0909 21:30:46.630037 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.630049 kubelet[2677]: W0909 21:30:46.630047 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.630097 kubelet[2677]: E0909 21:30:46.630055 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.630209 kubelet[2677]: E0909 21:30:46.630198 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.630209 kubelet[2677]: W0909 21:30:46.630207 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.630262 kubelet[2677]: E0909 21:30:46.630215 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.630366 kubelet[2677]: E0909 21:30:46.630355 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.630387 kubelet[2677]: W0909 21:30:46.630365 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.630387 kubelet[2677]: E0909 21:30:46.630373 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.630518 kubelet[2677]: E0909 21:30:46.630507 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.630518 kubelet[2677]: W0909 21:30:46.630517 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.630561 kubelet[2677]: E0909 21:30:46.630524 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.630748 kubelet[2677]: E0909 21:30:46.630732 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.630769 kubelet[2677]: W0909 21:30:46.630748 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.630769 kubelet[2677]: E0909 21:30:46.630760 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.630889 kubelet[2677]: E0909 21:30:46.630878 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.630911 kubelet[2677]: W0909 21:30:46.630888 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.630911 kubelet[2677]: E0909 21:30:46.630898 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.631044 kubelet[2677]: E0909 21:30:46.631034 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.631067 kubelet[2677]: W0909 21:30:46.631044 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.631067 kubelet[2677]: E0909 21:30:46.631052 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:46.631394 kubelet[2677]: E0909 21:30:46.631378 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.631394 kubelet[2677]: W0909 21:30:46.631392 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.631450 kubelet[2677]: E0909 21:30:46.631403 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 21:30:46.633843 kubelet[2677]: E0909 21:30:46.633813 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 21:30:46.633843 kubelet[2677]: W0909 21:30:46.633832 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 21:30:46.633885 kubelet[2677]: E0909 21:30:46.633843 2677 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 21:30:47.006816 containerd[1526]: time="2025-09-09T21:30:47.006774624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:47.007298 containerd[1526]: time="2025-09-09T21:30:47.007239872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 21:30:47.008111 containerd[1526]: time="2025-09-09T21:30:47.008079222Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:47.010450 containerd[1526]: time="2025-09-09T21:30:47.010422824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:47.011419 containerd[1526]: time="2025-09-09T21:30:47.011390289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.107058946s" Sep 9 21:30:47.011487 containerd[1526]: time="2025-09-09T21:30:47.011421978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 21:30:47.016837 containerd[1526]: time="2025-09-09T21:30:47.016513613Z" level=info msg="CreateContainer within sandbox \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 21:30:47.025420 containerd[1526]: time="2025-09-09T21:30:47.024829772Z" level=info msg="Container d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:47.044179 containerd[1526]: time="2025-09-09T21:30:47.044078527Z" level=info msg="CreateContainer within sandbox \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\"" Sep 9 21:30:47.044608 containerd[1526]: time="2025-09-09T21:30:47.044556658Z" level=info msg="StartContainer for \"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\"" Sep 9 21:30:47.046171 containerd[1526]: time="2025-09-09T21:30:47.046144213Z" level=info msg="connecting to shim d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4" address="unix:///run/containerd/s/04ed0b479e81060786c1bbafd74a4658b77863883179fa86107945e3695e479f" protocol=ttrpc version=3 Sep 9 21:30:47.066458 systemd[1]: Started cri-containerd-d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4.scope - libcontainer container d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4. Sep 9 21:30:47.097127 containerd[1526]: time="2025-09-09T21:30:47.097084453Z" level=info msg="StartContainer for \"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\" returns successfully" Sep 9 21:30:47.110382 systemd[1]: cri-containerd-d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4.scope: Deactivated successfully. Sep 9 21:30:47.110735 systemd[1]: cri-containerd-d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4.scope: Consumed 28ms CPU time, 6.3M memory peak, 4.5M written to disk. 
Sep 9 21:30:47.131897 containerd[1526]: time="2025-09-09T21:30:47.131733708Z" level=info msg="received exit event container_id:\"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\" id:\"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\" pid:3388 exited_at:{seconds:1757453447 nanos:128132201}" Sep 9 21:30:47.138246 containerd[1526]: time="2025-09-09T21:30:47.138208202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\" id:\"d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4\" pid:3388 exited_at:{seconds:1757453447 nanos:128132201}" Sep 9 21:30:47.161435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d393e83ab8ecdc4ba166e1af8ab2fe2813bdaeee86f93324457f3c8e7b1001f4-rootfs.mount: Deactivated successfully. Sep 9 21:30:47.486569 kubelet[2677]: E0909 21:30:47.486520 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j2xp" podUID="0c443cb4-c154-434b-8539-5fef8fd1056c" Sep 9 21:30:47.576224 kubelet[2677]: I0909 21:30:47.576190 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 21:30:47.576578 kubelet[2677]: E0909 21:30:47.576557 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:47.578225 containerd[1526]: time="2025-09-09T21:30:47.578188735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 21:30:49.488305 kubelet[2677]: E0909 21:30:49.487254 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j2xp" podUID="0c443cb4-c154-434b-8539-5fef8fd1056c" Sep 9 21:30:50.632392 containerd[1526]: time="2025-09-09T21:30:50.632351236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:50.633407 containerd[1526]: time="2025-09-09T21:30:50.633236809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 21:30:50.635298 containerd[1526]: time="2025-09-09T21:30:50.634591454Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:50.636352 containerd[1526]: time="2025-09-09T21:30:50.636320029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:50.637044 containerd[1526]: time="2025-09-09T21:30:50.637009075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.05878221s" Sep 9 21:30:50.637139 containerd[1526]: time="2025-09-09T21:30:50.637125943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 21:30:50.640308 containerd[1526]: time="2025-09-09T21:30:50.640229448Z" level=info msg="CreateContainer within sandbox \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 21:30:50.646650 containerd[1526]: time="2025-09-09T21:30:50.646623344Z" level=info msg="Container 8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:50.655614 containerd[1526]: time="2025-09-09T21:30:50.655575975Z" level=info msg="CreateContainer within sandbox \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\"" Sep 9 21:30:50.656041 containerd[1526]: time="2025-09-09T21:30:50.656018001Z" level=info msg="StartContainer for \"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\"" Sep 9 21:30:50.657899 containerd[1526]: time="2025-09-09T21:30:50.657862764Z" level=info msg="connecting to shim 8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac" address="unix:///run/containerd/s/04ed0b479e81060786c1bbafd74a4658b77863883179fa86107945e3695e479f" protocol=ttrpc version=3 Sep 9 21:30:50.691148 systemd[1]: Started cri-containerd-8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac.scope - libcontainer container 8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac. Sep 9 21:30:50.764013 containerd[1526]: time="2025-09-09T21:30:50.763977135Z" level=info msg="StartContainer for \"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\" returns successfully" Sep 9 21:30:51.266834 systemd[1]: cri-containerd-8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac.scope: Deactivated successfully. Sep 9 21:30:51.267763 systemd[1]: cri-containerd-8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac.scope: Consumed 441ms CPU time, 175.4M memory peak, 2.1M read from disk, 165.8M written to disk. 
Sep 9 21:30:51.280895 containerd[1526]: time="2025-09-09T21:30:51.280853044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\" id:\"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\" pid:3447 exited_at:{seconds:1757453451 nanos:280556696}" Sep 9 21:30:51.281016 containerd[1526]: time="2025-09-09T21:30:51.280921940Z" level=info msg="received exit event container_id:\"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\" id:\"8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac\" pid:3447 exited_at:{seconds:1757453451 nanos:280556696}" Sep 9 21:30:51.303881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d93716d0f85b3f6a7f9a9e99b42cc8362f56256e44b7cd3d71aba8812fff3ac-rootfs.mount: Deactivated successfully. Sep 9 21:30:51.344796 kubelet[2677]: I0909 21:30:51.344548 2677 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 21:30:51.385896 systemd[1]: Created slice kubepods-besteffort-podc1d063b2_4e7c_4387_88ea_97ea5864199d.slice - libcontainer container kubepods-besteffort-podc1d063b2_4e7c_4387_88ea_97ea5864199d.slice. Sep 9 21:30:51.397247 systemd[1]: Created slice kubepods-burstable-pod191f0b65_63b1_4ca5_96ba_7eb89927ac9c.slice - libcontainer container kubepods-burstable-pod191f0b65_63b1_4ca5_96ba_7eb89927ac9c.slice. Sep 9 21:30:51.404297 systemd[1]: Created slice kubepods-besteffort-pod84004ccb_e3a6_4853_9973_88e70645f76b.slice - libcontainer container kubepods-besteffort-pod84004ccb_e3a6_4853_9973_88e70645f76b.slice. Sep 9 21:30:51.410226 systemd[1]: Created slice kubepods-besteffort-pod050ffd45_2c7e_4d21_bbce_94d9d12594fb.slice - libcontainer container kubepods-besteffort-pod050ffd45_2c7e_4d21_bbce_94d9d12594fb.slice. 
Sep 9 21:30:51.416391 systemd[1]: Created slice kubepods-burstable-pod7b9ca5bc_0aa5_4fb1_b460_96d3f44a6e0c.slice - libcontainer container kubepods-burstable-pod7b9ca5bc_0aa5_4fb1_b460_96d3f44a6e0c.slice. Sep 9 21:30:51.423537 systemd[1]: Created slice kubepods-besteffort-pod93e05870_9442_4c95_8771_d0f79c0248cf.slice - libcontainer container kubepods-besteffort-pod93e05870_9442_4c95_8771_d0f79c0248cf.slice. Sep 9 21:30:51.427985 systemd[1]: Created slice kubepods-besteffort-pod6ecd122a_2c80_485e_865e_2677d804d4cd.slice - libcontainer container kubepods-besteffort-pod6ecd122a_2c80_485e_865e_2677d804d4cd.slice. Sep 9 21:30:51.458876 kubelet[2677]: I0909 21:30:51.458825 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwmz\" (UniqueName: \"kubernetes.io/projected/c1d063b2-4e7c-4387-88ea-97ea5864199d-kube-api-access-mbwmz\") pod \"calico-kube-controllers-bff587889-2m72r\" (UID: \"c1d063b2-4e7c-4387-88ea-97ea5864199d\") " pod="calico-system/calico-kube-controllers-bff587889-2m72r" Sep 9 21:30:51.465814 kubelet[2677]: I0909 21:30:51.458908 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84004ccb-e3a6-4853-9973-88e70645f76b-calico-apiserver-certs\") pod \"calico-apiserver-7cf8986b4f-ncwmj\" (UID: \"84004ccb-e3a6-4853-9973-88e70645f76b\") " pod="calico-apiserver/calico-apiserver-7cf8986b4f-ncwmj" Sep 9 21:30:51.465903 kubelet[2677]: I0909 21:30:51.465850 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ecd122a-2c80-485e-865e-2677d804d4cd-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-8754q\" (UID: \"6ecd122a-2c80-485e-865e-2677d804d4cd\") " pod="calico-system/goldmane-54d579b49d-8754q" Sep 9 21:30:51.465903 kubelet[2677]: I0909 21:30:51.465874 2677 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6ecd122a-2c80-485e-865e-2677d804d4cd-goldmane-key-pair\") pod \"goldmane-54d579b49d-8754q\" (UID: \"6ecd122a-2c80-485e-865e-2677d804d4cd\") " pod="calico-system/goldmane-54d579b49d-8754q" Sep 9 21:30:51.465903 kubelet[2677]: I0909 21:30:51.465894 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-backend-key-pair\") pod \"whisker-6ccc44944c-dfhqf\" (UID: \"93e05870-9442-4c95-8771-d0f79c0248cf\") " pod="calico-system/whisker-6ccc44944c-dfhqf" Sep 9 21:30:51.465973 kubelet[2677]: I0909 21:30:51.465910 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbt9x\" (UniqueName: \"kubernetes.io/projected/93e05870-9442-4c95-8771-d0f79c0248cf-kube-api-access-nbt9x\") pod \"whisker-6ccc44944c-dfhqf\" (UID: \"93e05870-9442-4c95-8771-d0f79c0248cf\") " pod="calico-system/whisker-6ccc44944c-dfhqf" Sep 9 21:30:51.465973 kubelet[2677]: I0909 21:30:51.465925 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ecd122a-2c80-485e-865e-2677d804d4cd-config\") pod \"goldmane-54d579b49d-8754q\" (UID: \"6ecd122a-2c80-485e-865e-2677d804d4cd\") " pod="calico-system/goldmane-54d579b49d-8754q" Sep 9 21:30:51.465973 kubelet[2677]: I0909 21:30:51.465943 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wsfh\" (UniqueName: \"kubernetes.io/projected/6ecd122a-2c80-485e-865e-2677d804d4cd-kube-api-access-2wsfh\") pod \"goldmane-54d579b49d-8754q\" (UID: \"6ecd122a-2c80-485e-865e-2677d804d4cd\") " pod="calico-system/goldmane-54d579b49d-8754q" Sep 9 21:30:51.466035 kubelet[2677]: I0909 
21:30:51.466007 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/191f0b65-63b1-4ca5-96ba-7eb89927ac9c-config-volume\") pod \"coredns-674b8bbfcf-pxf6p\" (UID: \"191f0b65-63b1-4ca5-96ba-7eb89927ac9c\") " pod="kube-system/coredns-674b8bbfcf-pxf6p" Sep 9 21:30:51.466112 kubelet[2677]: I0909 21:30:51.466094 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444h5\" (UniqueName: \"kubernetes.io/projected/7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c-kube-api-access-444h5\") pod \"coredns-674b8bbfcf-6lb7j\" (UID: \"7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c\") " pod="kube-system/coredns-674b8bbfcf-6lb7j" Sep 9 21:30:51.466136 kubelet[2677]: I0909 21:30:51.466125 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d063b2-4e7c-4387-88ea-97ea5864199d-tigera-ca-bundle\") pod \"calico-kube-controllers-bff587889-2m72r\" (UID: \"c1d063b2-4e7c-4387-88ea-97ea5864199d\") " pod="calico-system/calico-kube-controllers-bff587889-2m72r" Sep 9 21:30:51.466161 kubelet[2677]: I0909 21:30:51.466146 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c-config-volume\") pod \"coredns-674b8bbfcf-6lb7j\" (UID: \"7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c\") " pod="kube-system/coredns-674b8bbfcf-6lb7j" Sep 9 21:30:51.466198 kubelet[2677]: I0909 21:30:51.466174 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-ca-bundle\") pod \"whisker-6ccc44944c-dfhqf\" (UID: \"93e05870-9442-4c95-8771-d0f79c0248cf\") " 
pod="calico-system/whisker-6ccc44944c-dfhqf" Sep 9 21:30:51.466222 kubelet[2677]: I0909 21:30:51.466204 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnx6\" (UniqueName: \"kubernetes.io/projected/84004ccb-e3a6-4853-9973-88e70645f76b-kube-api-access-jvnx6\") pod \"calico-apiserver-7cf8986b4f-ncwmj\" (UID: \"84004ccb-e3a6-4853-9973-88e70645f76b\") " pod="calico-apiserver/calico-apiserver-7cf8986b4f-ncwmj" Sep 9 21:30:51.466245 kubelet[2677]: I0909 21:30:51.466221 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xss\" (UniqueName: \"kubernetes.io/projected/191f0b65-63b1-4ca5-96ba-7eb89927ac9c-kube-api-access-s7xss\") pod \"coredns-674b8bbfcf-pxf6p\" (UID: \"191f0b65-63b1-4ca5-96ba-7eb89927ac9c\") " pod="kube-system/coredns-674b8bbfcf-pxf6p" Sep 9 21:30:51.466245 kubelet[2677]: I0909 21:30:51.466237 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntzq2\" (UniqueName: \"kubernetes.io/projected/050ffd45-2c7e-4d21-bbce-94d9d12594fb-kube-api-access-ntzq2\") pod \"calico-apiserver-7cf8986b4f-5v74p\" (UID: \"050ffd45-2c7e-4d21-bbce-94d9d12594fb\") " pod="calico-apiserver/calico-apiserver-7cf8986b4f-5v74p" Sep 9 21:30:51.466300 kubelet[2677]: I0909 21:30:51.466253 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/050ffd45-2c7e-4d21-bbce-94d9d12594fb-calico-apiserver-certs\") pod \"calico-apiserver-7cf8986b4f-5v74p\" (UID: \"050ffd45-2c7e-4d21-bbce-94d9d12594fb\") " pod="calico-apiserver/calico-apiserver-7cf8986b4f-5v74p" Sep 9 21:30:51.491953 systemd[1]: Created slice kubepods-besteffort-pod0c443cb4_c154_434b_8539_5fef8fd1056c.slice - libcontainer container kubepods-besteffort-pod0c443cb4_c154_434b_8539_5fef8fd1056c.slice. 
Sep 9 21:30:51.494100 containerd[1526]: time="2025-09-09T21:30:51.494067434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j2xp,Uid:0c443cb4-c154-434b-8539-5fef8fd1056c,Namespace:calico-system,Attempt:0,}" Sep 9 21:30:51.599749 containerd[1526]: time="2025-09-09T21:30:51.598463519Z" level=error msg="Failed to destroy network for sandbox \"f5fd0b977111c0a50c2c09438ea33f1b0ea1c0b8b95cbafca6336885d1a0d7f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.599749 containerd[1526]: time="2025-09-09T21:30:51.599436423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 21:30:51.601422 containerd[1526]: time="2025-09-09T21:30:51.601318417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j2xp,Uid:0c443cb4-c154-434b-8539-5fef8fd1056c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fd0b977111c0a50c2c09438ea33f1b0ea1c0b8b95cbafca6336885d1a0d7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.601586 kubelet[2677]: E0909 21:30:51.601522 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fd0b977111c0a50c2c09438ea33f1b0ea1c0b8b95cbafca6336885d1a0d7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.601633 kubelet[2677]: E0909 21:30:51.601604 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f5fd0b977111c0a50c2c09438ea33f1b0ea1c0b8b95cbafca6336885d1a0d7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6j2xp" Sep 9 21:30:51.601671 kubelet[2677]: E0909 21:30:51.601643 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fd0b977111c0a50c2c09438ea33f1b0ea1c0b8b95cbafca6336885d1a0d7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6j2xp" Sep 9 21:30:51.601840 kubelet[2677]: E0909 21:30:51.601811 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6j2xp_calico-system(0c443cb4-c154-434b-8539-5fef8fd1056c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6j2xp_calico-system(0c443cb4-c154-434b-8539-5fef8fd1056c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5fd0b977111c0a50c2c09438ea33f1b0ea1c0b8b95cbafca6336885d1a0d7f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6j2xp" podUID="0c443cb4-c154-434b-8539-5fef8fd1056c" Sep 9 21:30:51.656942 systemd[1]: run-netns-cni\x2dcad5788f\x2d0dba\x2d506a\x2d90dc\x2d7f061547a888.mount: Deactivated successfully. 
Sep 9 21:30:51.691216 containerd[1526]: time="2025-09-09T21:30:51.691167111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff587889-2m72r,Uid:c1d063b2-4e7c-4387-88ea-97ea5864199d,Namespace:calico-system,Attempt:0,}" Sep 9 21:30:51.702419 kubelet[2677]: E0909 21:30:51.702335 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:51.705834 containerd[1526]: time="2025-09-09T21:30:51.705793960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxf6p,Uid:191f0b65-63b1-4ca5-96ba-7eb89927ac9c,Namespace:kube-system,Attempt:0,}" Sep 9 21:30:51.711763 containerd[1526]: time="2025-09-09T21:30:51.711731968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-ncwmj,Uid:84004ccb-e3a6-4853-9973-88e70645f76b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:30:51.714842 containerd[1526]: time="2025-09-09T21:30:51.714799435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-5v74p,Uid:050ffd45-2c7e-4d21-bbce-94d9d12594fb,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:30:51.720033 kubelet[2677]: E0909 21:30:51.719987 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:30:51.724120 containerd[1526]: time="2025-09-09T21:30:51.724057247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lb7j,Uid:7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c,Namespace:kube-system,Attempt:0,}" Sep 9 21:30:51.727014 containerd[1526]: time="2025-09-09T21:30:51.726978120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccc44944c-dfhqf,Uid:93e05870-9442-4c95-8771-d0f79c0248cf,Namespace:calico-system,Attempt:0,}" Sep 9 21:30:51.740958 containerd[1526]: 
time="2025-09-09T21:30:51.740871360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8754q,Uid:6ecd122a-2c80-485e-865e-2677d804d4cd,Namespace:calico-system,Attempt:0,}" Sep 9 21:30:51.784698 containerd[1526]: time="2025-09-09T21:30:51.784589109Z" level=error msg="Failed to destroy network for sandbox \"3fbdb93291cb0eb1634c50a78b547ed75b5bcf0809b0d2b4494b7801cd091e6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.787421 containerd[1526]: time="2025-09-09T21:30:51.787355586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff587889-2m72r,Uid:c1d063b2-4e7c-4387-88ea-97ea5864199d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbdb93291cb0eb1634c50a78b547ed75b5bcf0809b0d2b4494b7801cd091e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.787849 kubelet[2677]: E0909 21:30:51.787804 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbdb93291cb0eb1634c50a78b547ed75b5bcf0809b0d2b4494b7801cd091e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.787920 kubelet[2677]: E0909 21:30:51.787864 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbdb93291cb0eb1634c50a78b547ed75b5bcf0809b0d2b4494b7801cd091e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bff587889-2m72r" Sep 9 21:30:51.787920 kubelet[2677]: E0909 21:30:51.787886 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbdb93291cb0eb1634c50a78b547ed75b5bcf0809b0d2b4494b7801cd091e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bff587889-2m72r" Sep 9 21:30:51.787976 kubelet[2677]: E0909 21:30:51.787938 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bff587889-2m72r_calico-system(c1d063b2-4e7c-4387-88ea-97ea5864199d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bff587889-2m72r_calico-system(c1d063b2-4e7c-4387-88ea-97ea5864199d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fbdb93291cb0eb1634c50a78b547ed75b5bcf0809b0d2b4494b7801cd091e6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bff587889-2m72r" podUID="c1d063b2-4e7c-4387-88ea-97ea5864199d" Sep 9 21:30:51.808287 containerd[1526]: time="2025-09-09T21:30:51.808177062Z" level=error msg="Failed to destroy network for sandbox \"dd51f90a45e8b4396290a2ddd1afbff352627751046a5f03ca39794a0d126b97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.812171 containerd[1526]: time="2025-09-09T21:30:51.812127572Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-5v74p,Uid:050ffd45-2c7e-4d21-bbce-94d9d12594fb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd51f90a45e8b4396290a2ddd1afbff352627751046a5f03ca39794a0d126b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.812534 kubelet[2677]: E0909 21:30:51.812492 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd51f90a45e8b4396290a2ddd1afbff352627751046a5f03ca39794a0d126b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.812614 kubelet[2677]: E0909 21:30:51.812553 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd51f90a45e8b4396290a2ddd1afbff352627751046a5f03ca39794a0d126b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf8986b4f-5v74p" Sep 9 21:30:51.812614 kubelet[2677]: E0909 21:30:51.812573 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd51f90a45e8b4396290a2ddd1afbff352627751046a5f03ca39794a0d126b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf8986b4f-5v74p" Sep 9 21:30:51.812658 kubelet[2677]: E0909 21:30:51.812615 2677 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf8986b4f-5v74p_calico-apiserver(050ffd45-2c7e-4d21-bbce-94d9d12594fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf8986b4f-5v74p_calico-apiserver(050ffd45-2c7e-4d21-bbce-94d9d12594fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd51f90a45e8b4396290a2ddd1afbff352627751046a5f03ca39794a0d126b97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf8986b4f-5v74p" podUID="050ffd45-2c7e-4d21-bbce-94d9d12594fb" Sep 9 21:30:51.815138 containerd[1526]: time="2025-09-09T21:30:51.815101817Z" level=error msg="Failed to destroy network for sandbox \"a815bfca7d9d8bcefbd419b30d1472e8482fd29a853a6f7ca6a55d1576cd08c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.819626 containerd[1526]: time="2025-09-09T21:30:51.819507832Z" level=error msg="Failed to destroy network for sandbox \"a5b714ca60e9060efe36fc9d08d18d04fadcbc8b5cb0ba83dc1af4f0c56f268a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.821092 containerd[1526]: time="2025-09-09T21:30:51.821054588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-ncwmj,Uid:84004ccb-e3a6-4853-9973-88e70645f76b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b714ca60e9060efe36fc9d08d18d04fadcbc8b5cb0ba83dc1af4f0c56f268a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.821663 kubelet[2677]: E0909 21:30:51.821241 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b714ca60e9060efe36fc9d08d18d04fadcbc8b5cb0ba83dc1af4f0c56f268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.821663 kubelet[2677]: E0909 21:30:51.821301 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b714ca60e9060efe36fc9d08d18d04fadcbc8b5cb0ba83dc1af4f0c56f268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf8986b4f-ncwmj" Sep 9 21:30:51.821663 kubelet[2677]: E0909 21:30:51.821320 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b714ca60e9060efe36fc9d08d18d04fadcbc8b5cb0ba83dc1af4f0c56f268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf8986b4f-ncwmj" Sep 9 21:30:51.821785 kubelet[2677]: E0909 21:30:51.821363 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf8986b4f-ncwmj_calico-apiserver(84004ccb-e3a6-4853-9973-88e70645f76b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7cf8986b4f-ncwmj_calico-apiserver(84004ccb-e3a6-4853-9973-88e70645f76b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b714ca60e9060efe36fc9d08d18d04fadcbc8b5cb0ba83dc1af4f0c56f268a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf8986b4f-ncwmj" podUID="84004ccb-e3a6-4853-9973-88e70645f76b" Sep 9 21:30:51.824306 containerd[1526]: time="2025-09-09T21:30:51.824255805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxf6p,Uid:191f0b65-63b1-4ca5-96ba-7eb89927ac9c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a815bfca7d9d8bcefbd419b30d1472e8482fd29a853a6f7ca6a55d1576cd08c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.824486 kubelet[2677]: E0909 21:30:51.824453 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a815bfca7d9d8bcefbd419b30d1472e8482fd29a853a6f7ca6a55d1576cd08c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.824535 kubelet[2677]: E0909 21:30:51.824497 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a815bfca7d9d8bcefbd419b30d1472e8482fd29a853a6f7ca6a55d1576cd08c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-pxf6p" Sep 9 21:30:51.824535 kubelet[2677]: E0909 21:30:51.824527 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a815bfca7d9d8bcefbd419b30d1472e8482fd29a853a6f7ca6a55d1576cd08c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pxf6p" Sep 9 21:30:51.824783 kubelet[2677]: E0909 21:30:51.824568 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pxf6p_kube-system(191f0b65-63b1-4ca5-96ba-7eb89927ac9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pxf6p_kube-system(191f0b65-63b1-4ca5-96ba-7eb89927ac9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a815bfca7d9d8bcefbd419b30d1472e8482fd29a853a6f7ca6a55d1576cd08c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pxf6p" podUID="191f0b65-63b1-4ca5-96ba-7eb89927ac9c" Sep 9 21:30:51.830481 containerd[1526]: time="2025-09-09T21:30:51.830428907Z" level=error msg="Failed to destroy network for sandbox \"3e4badf6cbc2a7bb5309b68021f183ed7421bf5d2ce0ff2221b39949cdf69052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.831379 containerd[1526]: time="2025-09-09T21:30:51.831340237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8754q,Uid:6ecd122a-2c80-485e-865e-2677d804d4cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"3e4badf6cbc2a7bb5309b68021f183ed7421bf5d2ce0ff2221b39949cdf69052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.832232 containerd[1526]: time="2025-09-09T21:30:51.831435299Z" level=error msg="Failed to destroy network for sandbox \"16fe215dce173336941c0055c70e7860cd5197e857dbd26c030887ba313bf5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.832615 kubelet[2677]: E0909 21:30:51.832496 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4badf6cbc2a7bb5309b68021f183ed7421bf5d2ce0ff2221b39949cdf69052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.832615 kubelet[2677]: E0909 21:30:51.832541 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4badf6cbc2a7bb5309b68021f183ed7421bf5d2ce0ff2221b39949cdf69052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-8754q" Sep 9 21:30:51.832615 kubelet[2677]: E0909 21:30:51.832563 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4badf6cbc2a7bb5309b68021f183ed7421bf5d2ce0ff2221b39949cdf69052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-8754q" Sep 9 21:30:51.832740 kubelet[2677]: E0909 21:30:51.832601 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-8754q_calico-system(6ecd122a-2c80-485e-865e-2677d804d4cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-8754q_calico-system(6ecd122a-2c80-485e-865e-2677d804d4cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4badf6cbc2a7bb5309b68021f183ed7421bf5d2ce0ff2221b39949cdf69052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-8754q" podUID="6ecd122a-2c80-485e-865e-2677d804d4cd" Sep 9 21:30:51.833306 containerd[1526]: time="2025-09-09T21:30:51.833247556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lb7j,Uid:7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16fe215dce173336941c0055c70e7860cd5197e857dbd26c030887ba313bf5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.833582 kubelet[2677]: E0909 21:30:51.833527 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16fe215dce173336941c0055c70e7860cd5197e857dbd26c030887ba313bf5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.833630 kubelet[2677]: E0909 21:30:51.833584 
2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16fe215dce173336941c0055c70e7860cd5197e857dbd26c030887ba313bf5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6lb7j" Sep 9 21:30:51.833630 kubelet[2677]: E0909 21:30:51.833600 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16fe215dce173336941c0055c70e7860cd5197e857dbd26c030887ba313bf5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6lb7j" Sep 9 21:30:51.833797 kubelet[2677]: E0909 21:30:51.833654 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6lb7j_kube-system(7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6lb7j_kube-system(7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16fe215dce173336941c0055c70e7860cd5197e857dbd26c030887ba313bf5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6lb7j" podUID="7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c" Sep 9 21:30:51.839753 containerd[1526]: time="2025-09-09T21:30:51.839697962Z" level=error msg="Failed to destroy network for sandbox \"3d2e72235eaf9b8b4f8eef4803ad71830babb5f82b8324d12ec5586d2368e0fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.840619 containerd[1526]: time="2025-09-09T21:30:51.840581886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccc44944c-dfhqf,Uid:93e05870-9442-4c95-8771-d0f79c0248cf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2e72235eaf9b8b4f8eef4803ad71830babb5f82b8324d12ec5586d2368e0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.841106 kubelet[2677]: E0909 21:30:51.840745 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2e72235eaf9b8b4f8eef4803ad71830babb5f82b8324d12ec5586d2368e0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 21:30:51.841106 kubelet[2677]: E0909 21:30:51.840785 2677 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2e72235eaf9b8b4f8eef4803ad71830babb5f82b8324d12ec5586d2368e0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ccc44944c-dfhqf" Sep 9 21:30:51.841106 kubelet[2677]: E0909 21:30:51.840801 2677 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d2e72235eaf9b8b4f8eef4803ad71830babb5f82b8324d12ec5586d2368e0fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ccc44944c-dfhqf" Sep 9 21:30:51.841205 kubelet[2677]: E0909 21:30:51.840841 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ccc44944c-dfhqf_calico-system(93e05870-9442-4c95-8771-d0f79c0248cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ccc44944c-dfhqf_calico-system(93e05870-9442-4c95-8771-d0f79c0248cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d2e72235eaf9b8b4f8eef4803ad71830babb5f82b8324d12ec5586d2368e0fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ccc44944c-dfhqf" podUID="93e05870-9442-4c95-8771-d0f79c0248cf" Sep 9 21:30:52.648293 systemd[1]: run-netns-cni\x2d7aea1b18\x2d93bb\x2da9d6\x2d313a\x2d5f2ea66e0916.mount: Deactivated successfully. Sep 9 21:30:52.648387 systemd[1]: run-netns-cni\x2dab43fb09\x2d35fc\x2d0c02\x2dd614\x2de455206ac91c.mount: Deactivated successfully. Sep 9 21:30:52.648446 systemd[1]: run-netns-cni\x2d4ae472da\x2db93b\x2d69a9\x2df43e\x2dd327de13fab6.mount: Deactivated successfully. Sep 9 21:30:52.648491 systemd[1]: run-netns-cni\x2da7c80b11\x2d2bcd\x2da0aa\x2d04b7\x2d937fb1cb42e8.mount: Deactivated successfully. Sep 9 21:30:55.721178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793572019.mount: Deactivated successfully. 
Sep 9 21:30:55.946666 containerd[1526]: time="2025-09-09T21:30:55.946618970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:55.947606 containerd[1526]: time="2025-09-09T21:30:55.947455614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 21:30:55.948324 containerd[1526]: time="2025-09-09T21:30:55.948257892Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:55.950747 containerd[1526]: time="2025-09-09T21:30:55.950166627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:55.950747 containerd[1526]: time="2025-09-09T21:30:55.950631198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.351148165s" Sep 9 21:30:55.950747 containerd[1526]: time="2025-09-09T21:30:55.950660284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 21:30:55.975918 containerd[1526]: time="2025-09-09T21:30:55.975827951Z" level=info msg="CreateContainer within sandbox \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 21:30:55.984005 containerd[1526]: time="2025-09-09T21:30:55.983671573Z" level=info msg="Container 
aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:55.992682 containerd[1526]: time="2025-09-09T21:30:55.992639896Z" level=info msg="CreateContainer within sandbox \"e0907cb96749cd83955707ed4660d3d386324f9362d4db6ffb1e03fbe0973f96\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3\"" Sep 9 21:30:55.994953 containerd[1526]: time="2025-09-09T21:30:55.994918824Z" level=info msg="StartContainer for \"aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3\"" Sep 9 21:30:55.996671 containerd[1526]: time="2025-09-09T21:30:55.996639162Z" level=info msg="connecting to shim aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3" address="unix:///run/containerd/s/04ed0b479e81060786c1bbafd74a4658b77863883179fa86107945e3695e479f" protocol=ttrpc version=3 Sep 9 21:30:56.017444 systemd[1]: Started cri-containerd-aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3.scope - libcontainer container aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3. Sep 9 21:30:56.177724 containerd[1526]: time="2025-09-09T21:30:56.177683243Z" level=info msg="StartContainer for \"aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3\" returns successfully" Sep 9 21:30:56.182734 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 21:30:56.182815 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 21:30:56.400927 kubelet[2677]: I0909 21:30:56.400891 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-backend-key-pair\") pod \"93e05870-9442-4c95-8771-d0f79c0248cf\" (UID: \"93e05870-9442-4c95-8771-d0f79c0248cf\") " Sep 9 21:30:56.401305 kubelet[2677]: I0909 21:30:56.400939 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbt9x\" (UniqueName: \"kubernetes.io/projected/93e05870-9442-4c95-8771-d0f79c0248cf-kube-api-access-nbt9x\") pod \"93e05870-9442-4c95-8771-d0f79c0248cf\" (UID: \"93e05870-9442-4c95-8771-d0f79c0248cf\") " Sep 9 21:30:56.401305 kubelet[2677]: I0909 21:30:56.400959 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-ca-bundle\") pod \"93e05870-9442-4c95-8771-d0f79c0248cf\" (UID: \"93e05870-9442-4c95-8771-d0f79c0248cf\") " Sep 9 21:30:56.416890 kubelet[2677]: I0909 21:30:56.416841 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "93e05870-9442-4c95-8771-d0f79c0248cf" (UID: "93e05870-9442-4c95-8771-d0f79c0248cf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 21:30:56.417111 kubelet[2677]: I0909 21:30:56.417087 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e05870-9442-4c95-8771-d0f79c0248cf-kube-api-access-nbt9x" (OuterVolumeSpecName: "kube-api-access-nbt9x") pod "93e05870-9442-4c95-8771-d0f79c0248cf" (UID: "93e05870-9442-4c95-8771-d0f79c0248cf"). InnerVolumeSpecName "kube-api-access-nbt9x". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 21:30:56.423407 kubelet[2677]: I0909 21:30:56.423364 2677 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "93e05870-9442-4c95-8771-d0f79c0248cf" (UID: "93e05870-9442-4c95-8771-d0f79c0248cf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 21:30:56.501951 kubelet[2677]: I0909 21:30:56.501908 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 21:30:56.501951 kubelet[2677]: I0909 21:30:56.501943 2677 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nbt9x\" (UniqueName: \"kubernetes.io/projected/93e05870-9442-4c95-8771-d0f79c0248cf-kube-api-access-nbt9x\") on node \"localhost\" DevicePath \"\"" Sep 9 21:30:56.501951 kubelet[2677]: I0909 21:30:56.501955 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e05870-9442-4c95-8771-d0f79c0248cf-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 21:30:56.620864 systemd[1]: Removed slice kubepods-besteffort-pod93e05870_9442_4c95_8771_d0f79c0248cf.slice - libcontainer container kubepods-besteffort-pod93e05870_9442_4c95_8771_d0f79c0248cf.slice. 
Sep 9 21:30:56.642996 kubelet[2677]: I0909 21:30:56.642941 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h72rw" podStartSLOduration=1.158401589 podStartE2EDuration="12.642819496s" podCreationTimestamp="2025-09-09 21:30:44 +0000 UTC" firstStartedPulling="2025-09-09 21:30:44.46956247 +0000 UTC m=+21.094499554" lastFinishedPulling="2025-09-09 21:30:55.953980377 +0000 UTC m=+32.578917461" observedRunningTime="2025-09-09 21:30:56.634089123 +0000 UTC m=+33.259026207" watchObservedRunningTime="2025-09-09 21:30:56.642819496 +0000 UTC m=+33.267756580" Sep 9 21:30:56.679130 systemd[1]: Created slice kubepods-besteffort-podaf8917e0_24ae_4a0f_920c_b829a8b74746.slice - libcontainer container kubepods-besteffort-podaf8917e0_24ae_4a0f_920c_b829a8b74746.slice. Sep 9 21:30:56.703663 kubelet[2677]: I0909 21:30:56.703615 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af8917e0-24ae-4a0f-920c-b829a8b74746-whisker-backend-key-pair\") pod \"whisker-84c4f6c54-vkkdp\" (UID: \"af8917e0-24ae-4a0f-920c-b829a8b74746\") " pod="calico-system/whisker-84c4f6c54-vkkdp" Sep 9 21:30:56.704247 kubelet[2677]: I0909 21:30:56.704030 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qwq\" (UniqueName: \"kubernetes.io/projected/af8917e0-24ae-4a0f-920c-b829a8b74746-kube-api-access-l4qwq\") pod \"whisker-84c4f6c54-vkkdp\" (UID: \"af8917e0-24ae-4a0f-920c-b829a8b74746\") " pod="calico-system/whisker-84c4f6c54-vkkdp" Sep 9 21:30:56.704247 kubelet[2677]: I0909 21:30:56.704068 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af8917e0-24ae-4a0f-920c-b829a8b74746-whisker-ca-bundle\") pod \"whisker-84c4f6c54-vkkdp\" (UID: \"af8917e0-24ae-4a0f-920c-b829a8b74746\") " 
pod="calico-system/whisker-84c4f6c54-vkkdp" Sep 9 21:30:56.721971 systemd[1]: var-lib-kubelet-pods-93e05870\x2d9442\x2d4c95\x2d8771\x2dd0f79c0248cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbt9x.mount: Deactivated successfully. Sep 9 21:30:56.722060 systemd[1]: var-lib-kubelet-pods-93e05870\x2d9442\x2d4c95\x2d8771\x2dd0f79c0248cf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 21:30:56.982678 containerd[1526]: time="2025-09-09T21:30:56.982573603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c4f6c54-vkkdp,Uid:af8917e0-24ae-4a0f-920c-b829a8b74746,Namespace:calico-system,Attempt:0,}" Sep 9 21:30:57.126373 systemd-networkd[1435]: cali081f795bcd1: Link UP Sep 9 21:30:57.126721 systemd-networkd[1435]: cali081f795bcd1: Gained carrier Sep 9 21:30:57.139138 containerd[1526]: 2025-09-09 21:30:57.004 [INFO][3819] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 21:30:57.139138 containerd[1526]: 2025-09-09 21:30:57.032 [INFO][3819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84c4f6c54--vkkdp-eth0 whisker-84c4f6c54- calico-system af8917e0-24ae-4a0f-920c-b829a8b74746 900 0 2025-09-09 21:30:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84c4f6c54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84c4f6c54-vkkdp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali081f795bcd1 [] [] }} ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-" Sep 9 21:30:57.139138 containerd[1526]: 2025-09-09 21:30:57.032 [INFO][3819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.139138 containerd[1526]: 2025-09-09 21:30:57.083 [INFO][3835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" HandleID="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Workload="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.083 [INFO][3835] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" HandleID="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Workload="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003887b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84c4f6c54-vkkdp", "timestamp":"2025-09-09 21:30:57.082991393 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.083 [INFO][3835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.083 [INFO][3835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.083 [INFO][3835] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.092 [INFO][3835] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" host="localhost" Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.097 [INFO][3835] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.101 [INFO][3835] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.102 [INFO][3835] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.104 [INFO][3835] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:30:57.139360 containerd[1526]: 2025-09-09 21:30:57.105 [INFO][3835] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" host="localhost" Sep 9 21:30:57.139572 containerd[1526]: 2025-09-09 21:30:57.107 [INFO][3835] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba Sep 9 21:30:57.139572 containerd[1526]: 2025-09-09 21:30:57.110 [INFO][3835] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" host="localhost" Sep 9 21:30:57.139572 containerd[1526]: 2025-09-09 21:30:57.114 [INFO][3835] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" host="localhost" Sep 9 21:30:57.139572 containerd[1526]: 2025-09-09 21:30:57.114 [INFO][3835] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" host="localhost" Sep 9 21:30:57.139572 containerd[1526]: 2025-09-09 21:30:57.114 [INFO][3835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:30:57.139572 containerd[1526]: 2025-09-09 21:30:57.114 [INFO][3835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" HandleID="k8s-pod-network.893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Workload="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.139683 containerd[1526]: 2025-09-09 21:30:57.117 [INFO][3819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84c4f6c54--vkkdp-eth0", GenerateName:"whisker-84c4f6c54-", Namespace:"calico-system", SelfLink:"", UID:"af8917e0-24ae-4a0f-920c-b829a8b74746", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c4f6c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84c4f6c54-vkkdp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali081f795bcd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:30:57.139683 containerd[1526]: 2025-09-09 21:30:57.117 [INFO][3819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.139751 containerd[1526]: 2025-09-09 21:30:57.117 [INFO][3819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali081f795bcd1 ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.139751 containerd[1526]: 2025-09-09 21:30:57.127 [INFO][3819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.139791 containerd[1526]: 2025-09-09 21:30:57.127 [INFO][3819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" 
WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84c4f6c54--vkkdp-eth0", GenerateName:"whisker-84c4f6c54-", Namespace:"calico-system", SelfLink:"", UID:"af8917e0-24ae-4a0f-920c-b829a8b74746", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c4f6c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba", Pod:"whisker-84c4f6c54-vkkdp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali081f795bcd1", MAC:"9e:39:d5:7b:38:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:30:57.139833 containerd[1526]: 2025-09-09 21:30:57.137 [INFO][3819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" Namespace="calico-system" Pod="whisker-84c4f6c54-vkkdp" WorkloadEndpoint="localhost-k8s-whisker--84c4f6c54--vkkdp-eth0" Sep 9 21:30:57.196371 containerd[1526]: time="2025-09-09T21:30:57.196329336Z" level=info msg="connecting to shim 
893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba" address="unix:///run/containerd/s/3f71df353d22fc8c751ada349384e8517c9765b464a3d8b5dbb56c3960aa0da0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:30:57.221401 systemd[1]: Started cri-containerd-893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba.scope - libcontainer container 893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba. Sep 9 21:30:57.231202 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:30:57.248883 containerd[1526]: time="2025-09-09T21:30:57.248792480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c4f6c54-vkkdp,Uid:af8917e0-24ae-4a0f-920c-b829a8b74746,Namespace:calico-system,Attempt:0,} returns sandbox id \"893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba\"" Sep 9 21:30:57.251106 containerd[1526]: time="2025-09-09T21:30:57.251079618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 21:30:57.506061 kubelet[2677]: I0909 21:30:57.505964 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e05870-9442-4c95-8771-d0f79c0248cf" path="/var/lib/kubelet/pods/93e05870-9442-4c95-8771-d0f79c0248cf/volumes" Sep 9 21:30:57.751518 containerd[1526]: time="2025-09-09T21:30:57.751480268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3\" id:\"6a5d8ad1d5ea3ae0b4da2a5a497cc8398218a2be4aad4de2ccf860ae88d2c635\" pid:4010 exit_status:1 exited_at:{seconds:1757453457 nanos:750695485}" Sep 9 21:30:58.247736 systemd-networkd[1435]: cali081f795bcd1: Gained IPv6LL Sep 9 21:30:58.255427 containerd[1526]: time="2025-09-09T21:30:58.255388162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:58.256184 containerd[1526]: 
time="2025-09-09T21:30:58.256091526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 21:30:58.256784 containerd[1526]: time="2025-09-09T21:30:58.256754162Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:58.259577 containerd[1526]: time="2025-09-09T21:30:58.259511689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:30:58.260285 containerd[1526]: time="2025-09-09T21:30:58.260035781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.008924358s" Sep 9 21:30:58.260285 containerd[1526]: time="2025-09-09T21:30:58.260067827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 21:30:58.264301 containerd[1526]: time="2025-09-09T21:30:58.264246724Z" level=info msg="CreateContainer within sandbox \"893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 21:30:58.272072 containerd[1526]: time="2025-09-09T21:30:58.272028296Z" level=info msg="Container 4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:30:58.275439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726683082.mount: Deactivated successfully. 
Sep 9 21:30:58.278345 containerd[1526]: time="2025-09-09T21:30:58.278314085Z" level=info msg="CreateContainer within sandbox \"893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004\"" Sep 9 21:30:58.279016 containerd[1526]: time="2025-09-09T21:30:58.278907470Z" level=info msg="StartContainer for \"4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004\"" Sep 9 21:30:58.280441 containerd[1526]: time="2025-09-09T21:30:58.280415696Z" level=info msg="connecting to shim 4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004" address="unix:///run/containerd/s/3f71df353d22fc8c751ada349384e8517c9765b464a3d8b5dbb56c3960aa0da0" protocol=ttrpc version=3 Sep 9 21:30:58.307412 systemd[1]: Started cri-containerd-4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004.scope - libcontainer container 4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004. Sep 9 21:30:58.339040 containerd[1526]: time="2025-09-09T21:30:58.339009870Z" level=info msg="StartContainer for \"4236c2c02d2623bfa40cb471ceaf5fb9200bd1471484517f66f752f56144f004\" returns successfully" Sep 9 21:30:58.344249 containerd[1526]: time="2025-09-09T21:30:58.343918336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 21:30:58.713991 containerd[1526]: time="2025-09-09T21:30:58.713901272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3\" id:\"dda8bd9e2be67caf0f2a7b206b595917d22ad1fc88ab9b362fcd9692b49559cd\" pid:4082 exit_status:1 exited_at:{seconds:1757453458 nanos:712230897}" Sep 9 21:31:00.070311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209633142.mount: Deactivated successfully. 
Sep 9 21:31:00.085049 containerd[1526]: time="2025-09-09T21:31:00.084998193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:00.085527 containerd[1526]: time="2025-09-09T21:31:00.085492595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 21:31:00.086370 containerd[1526]: time="2025-09-09T21:31:00.086338734Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:00.088499 containerd[1526]: time="2025-09-09T21:31:00.088473246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:00.089395 containerd[1526]: time="2025-09-09T21:31:00.089045261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.745082157s" Sep 9 21:31:00.089395 containerd[1526]: time="2025-09-09T21:31:00.089076666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 21:31:00.095499 containerd[1526]: time="2025-09-09T21:31:00.095470520Z" level=info msg="CreateContainer within sandbox \"893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 21:31:00.101120 
containerd[1526]: time="2025-09-09T21:31:00.101074405Z" level=info msg="Container 3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:31:00.109128 containerd[1526]: time="2025-09-09T21:31:00.109072884Z" level=info msg="CreateContainer within sandbox \"893e2555e51f385a2114c59ce369ee3d7369dc8717591380a288f5c4b434f7ba\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787\"" Sep 9 21:31:00.110352 containerd[1526]: time="2025-09-09T21:31:00.109552403Z" level=info msg="StartContainer for \"3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787\"" Sep 9 21:31:00.111568 containerd[1526]: time="2025-09-09T21:31:00.111532690Z" level=info msg="connecting to shim 3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787" address="unix:///run/containerd/s/3f71df353d22fc8c751ada349384e8517c9765b464a3d8b5dbb56c3960aa0da0" protocol=ttrpc version=3 Sep 9 21:31:00.141440 systemd[1]: Started cri-containerd-3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787.scope - libcontainer container 3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787. 
Sep 9 21:31:00.204718 containerd[1526]: time="2025-09-09T21:31:00.204680933Z" level=info msg="StartContainer for \"3dad4b50309ee3b1f18c84e0937f9675e9b3909f26283c9ee2411a20ac2d4787\" returns successfully" Sep 9 21:31:01.396704 kubelet[2677]: I0909 21:31:01.396306 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 21:31:01.397867 kubelet[2677]: E0909 21:31:01.397739 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:01.411729 kubelet[2677]: I0909 21:31:01.411554 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-84c4f6c54-vkkdp" podStartSLOduration=2.570849942 podStartE2EDuration="5.411498805s" podCreationTimestamp="2025-09-09 21:30:56 +0000 UTC" firstStartedPulling="2025-09-09 21:30:57.250093878 +0000 UTC m=+33.875030962" lastFinishedPulling="2025-09-09 21:31:00.090742741 +0000 UTC m=+36.715679825" observedRunningTime="2025-09-09 21:31:00.64532017 +0000 UTC m=+37.270257294" watchObservedRunningTime="2025-09-09 21:31:01.411498805 +0000 UTC m=+38.036435889" Sep 9 21:31:01.633879 kubelet[2677]: E0909 21:31:01.633838 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:02.060062 systemd-networkd[1435]: vxlan.calico: Link UP Sep 9 21:31:02.060068 systemd-networkd[1435]: vxlan.calico: Gained carrier Sep 9 21:31:02.487502 containerd[1526]: time="2025-09-09T21:31:02.487457198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff587889-2m72r,Uid:c1d063b2-4e7c-4387-88ea-97ea5864199d,Namespace:calico-system,Attempt:0,}" Sep 9 21:31:02.658637 systemd-networkd[1435]: calib99c6c92889: Link UP Sep 9 21:31:02.659307 systemd-networkd[1435]: calib99c6c92889: Gained carrier Sep 9 21:31:02.671155 
containerd[1526]: 2025-09-09 21:31:02.584 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0 calico-kube-controllers-bff587889- calico-system c1d063b2-4e7c-4387-88ea-97ea5864199d 821 0 2025-09-09 21:30:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bff587889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bff587889-2m72r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib99c6c92889 [] [] }} ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-" Sep 9 21:31:02.671155 containerd[1526]: 2025-09-09 21:31:02.584 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.671155 containerd[1526]: 2025-09-09 21:31:02.613 [INFO][4358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" HandleID="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Workload="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.613 [INFO][4358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" 
HandleID="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Workload="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bff587889-2m72r", "timestamp":"2025-09-09 21:31:02.613347896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.613 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.613 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.613 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.626 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" host="localhost" Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.631 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.635 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.637 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:02.671737 containerd[1526]: 2025-09-09 21:31:02.638 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:02.671737 containerd[1526]: 
2025-09-09 21:31:02.639 [INFO][4358] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" host="localhost" Sep 9 21:31:02.671963 containerd[1526]: 2025-09-09 21:31:02.640 [INFO][4358] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809 Sep 9 21:31:02.671963 containerd[1526]: 2025-09-09 21:31:02.649 [INFO][4358] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" host="localhost" Sep 9 21:31:02.671963 containerd[1526]: 2025-09-09 21:31:02.653 [INFO][4358] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" host="localhost" Sep 9 21:31:02.671963 containerd[1526]: 2025-09-09 21:31:02.653 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" host="localhost" Sep 9 21:31:02.671963 containerd[1526]: 2025-09-09 21:31:02.653 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 21:31:02.671963 containerd[1526]: 2025-09-09 21:31:02.653 [INFO][4358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" HandleID="k8s-pod-network.bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Workload="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.672074 containerd[1526]: 2025-09-09 21:31:02.656 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0", GenerateName:"calico-kube-controllers-bff587889-", Namespace:"calico-system", SelfLink:"", UID:"c1d063b2-4e7c-4387-88ea-97ea5864199d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bff587889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bff587889-2m72r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib99c6c92889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:02.672165 containerd[1526]: 2025-09-09 21:31:02.656 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.672165 containerd[1526]: 2025-09-09 21:31:02.656 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib99c6c92889 ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.672165 containerd[1526]: 2025-09-09 21:31:02.659 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.672236 containerd[1526]: 2025-09-09 21:31:02.660 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0", GenerateName:"calico-kube-controllers-bff587889-", Namespace:"calico-system", SelfLink:"", UID:"c1d063b2-4e7c-4387-88ea-97ea5864199d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bff587889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809", Pod:"calico-kube-controllers-bff587889-2m72r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib99c6c92889", MAC:"4a:8a:eb:61:0d:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:02.672307 containerd[1526]: 2025-09-09 21:31:02.668 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" Namespace="calico-system" Pod="calico-kube-controllers-bff587889-2m72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bff587889--2m72r-eth0" Sep 9 21:31:02.694960 containerd[1526]: time="2025-09-09T21:31:02.694849478Z" level=info msg="connecting to shim 
bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809" address="unix:///run/containerd/s/cb8c907404188c11348ed019a55e55558b34ab7a5b09f0eb68470aa4886b1317" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:02.725495 systemd[1]: Started cri-containerd-bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809.scope - libcontainer container bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809. Sep 9 21:31:02.738232 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:02.759218 containerd[1526]: time="2025-09-09T21:31:02.759173321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bff587889-2m72r,Uid:c1d063b2-4e7c-4387-88ea-97ea5864199d,Namespace:calico-system,Attempt:0,} returns sandbox id \"bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809\"" Sep 9 21:31:02.770977 containerd[1526]: time="2025-09-09T21:31:02.770945184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 21:31:02.857933 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:49036.service - OpenSSH per-connection server daemon (10.0.0.1:49036). Sep 9 21:31:02.920042 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 49036 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:31:02.921402 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:31:02.925117 systemd-logind[1504]: New session 8 of user core. Sep 9 21:31:02.930431 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 21:31:03.103505 sshd[4426]: Connection closed by 10.0.0.1 port 49036 Sep 9 21:31:03.103749 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Sep 9 21:31:03.107361 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:49036.service: Deactivated successfully. Sep 9 21:31:03.109329 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 9 21:31:03.111874 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Sep 9 21:31:03.113176 systemd-logind[1504]: Removed session 8. Sep 9 21:31:03.486920 containerd[1526]: time="2025-09-09T21:31:03.486885733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-5v74p,Uid:050ffd45-2c7e-4d21-bbce-94d9d12594fb,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:31:03.599857 systemd-networkd[1435]: cali108a2b1e5dd: Link UP Sep 9 21:31:03.600460 systemd-networkd[1435]: cali108a2b1e5dd: Gained carrier Sep 9 21:31:03.615039 containerd[1526]: 2025-09-09 21:31:03.523 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0 calico-apiserver-7cf8986b4f- calico-apiserver 050ffd45-2c7e-4d21-bbce-94d9d12594fb 827 0 2025-09-09 21:30:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf8986b4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cf8986b4f-5v74p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali108a2b1e5dd [] [] }} ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-" Sep 9 21:31:03.615039 containerd[1526]: 2025-09-09 21:31:03.524 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.615039 containerd[1526]: 2025-09-09 21:31:03.551 [INFO][4454] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" HandleID="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Workload="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.551 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" HandleID="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Workload="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cf8986b4f-5v74p", "timestamp":"2025-09-09 21:31:03.551788369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.552 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.552 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.552 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.563 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" host="localhost" Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.568 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.576 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.578 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.580 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:03.615520 containerd[1526]: 2025-09-09 21:31:03.580 [INFO][4454] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" host="localhost" Sep 9 21:31:03.616783 containerd[1526]: 2025-09-09 21:31:03.582 [INFO][4454] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56 Sep 9 21:31:03.616783 containerd[1526]: 2025-09-09 21:31:03.586 [INFO][4454] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" host="localhost" Sep 9 21:31:03.616783 containerd[1526]: 2025-09-09 21:31:03.592 [INFO][4454] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" host="localhost" Sep 9 21:31:03.616783 containerd[1526]: 2025-09-09 21:31:03.592 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" host="localhost" Sep 9 21:31:03.616783 containerd[1526]: 2025-09-09 21:31:03.592 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:31:03.616783 containerd[1526]: 2025-09-09 21:31:03.592 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" HandleID="k8s-pod-network.decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Workload="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.616895 containerd[1526]: 2025-09-09 21:31:03.596 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0", GenerateName:"calico-apiserver-7cf8986b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"050ffd45-2c7e-4d21-bbce-94d9d12594fb", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf8986b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cf8986b4f-5v74p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali108a2b1e5dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:03.616949 containerd[1526]: 2025-09-09 21:31:03.596 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.616949 containerd[1526]: 2025-09-09 21:31:03.596 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali108a2b1e5dd ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.616949 containerd[1526]: 2025-09-09 21:31:03.599 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.617006 containerd[1526]: 2025-09-09 21:31:03.600 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0", GenerateName:"calico-apiserver-7cf8986b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"050ffd45-2c7e-4d21-bbce-94d9d12594fb", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf8986b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56", Pod:"calico-apiserver-7cf8986b4f-5v74p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali108a2b1e5dd", MAC:"52:c2:bd:36:2c:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:03.617051 containerd[1526]: 2025-09-09 21:31:03.611 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-5v74p" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--5v74p-eth0" Sep 9 21:31:03.638226 containerd[1526]: time="2025-09-09T21:31:03.638192437Z" level=info msg="connecting to shim decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56" address="unix:///run/containerd/s/0bd2ccbeedd0c71e5feb390083aca61b73211f0d7fc5cf6b14f1d67b14940575" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:03.670464 systemd[1]: Started cri-containerd-decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56.scope - libcontainer container decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56. Sep 9 21:31:03.680864 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:03.688505 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Sep 9 21:31:03.700146 containerd[1526]: time="2025-09-09T21:31:03.700108104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-5v74p,Uid:050ffd45-2c7e-4d21-bbce-94d9d12594fb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56\"" Sep 9 21:31:03.752374 systemd-networkd[1435]: calib99c6c92889: Gained IPv6LL Sep 9 21:31:05.166198 containerd[1526]: time="2025-09-09T21:31:05.166151034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:05.166734 containerd[1526]: time="2025-09-09T21:31:05.166703272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 21:31:05.167994 containerd[1526]: time="2025-09-09T21:31:05.167943288Z" level=info msg="ImageCreate event 
name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:05.169759 containerd[1526]: time="2025-09-09T21:31:05.169724141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:05.170473 containerd[1526]: time="2025-09-09T21:31:05.170443243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.399470696s" Sep 9 21:31:05.170512 containerd[1526]: time="2025-09-09T21:31:05.170477288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 21:31:05.171396 containerd[1526]: time="2025-09-09T21:31:05.171346412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 21:31:05.181210 containerd[1526]: time="2025-09-09T21:31:05.181177968Z" level=info msg="CreateContainer within sandbox \"bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 21:31:05.189705 containerd[1526]: time="2025-09-09T21:31:05.189434061Z" level=info msg="Container 85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:31:05.212404 containerd[1526]: time="2025-09-09T21:31:05.212361757Z" level=info msg="CreateContainer within sandbox \"bef105a088ca911c8f5e7af38260772938e684412eb5b53a89dd7be907792809\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95\"" Sep 9 21:31:05.213086 containerd[1526]: time="2025-09-09T21:31:05.213062817Z" level=info msg="StartContainer for \"85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95\"" Sep 9 21:31:05.214262 containerd[1526]: time="2025-09-09T21:31:05.214222502Z" level=info msg="connecting to shim 85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95" address="unix:///run/containerd/s/cb8c907404188c11348ed019a55e55558b34ab7a5b09f0eb68470aa4886b1317" protocol=ttrpc version=3 Sep 9 21:31:05.240447 systemd[1]: Started cri-containerd-85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95.scope - libcontainer container 85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95. Sep 9 21:31:05.282695 containerd[1526]: time="2025-09-09T21:31:05.282640659Z" level=info msg="StartContainer for \"85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95\" returns successfully" Sep 9 21:31:05.487335 kubelet[2677]: E0909 21:31:05.487210 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:05.487682 containerd[1526]: time="2025-09-09T21:31:05.487471151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8754q,Uid:6ecd122a-2c80-485e-865e-2677d804d4cd,Namespace:calico-system,Attempt:0,}" Sep 9 21:31:05.487682 containerd[1526]: time="2025-09-09T21:31:05.487551323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lb7j,Uid:7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c,Namespace:kube-system,Attempt:0,}" Sep 9 21:31:05.598076 systemd-networkd[1435]: calia4927d83607: Link UP Sep 9 21:31:05.598707 systemd-networkd[1435]: calia4927d83607: Gained carrier Sep 9 21:31:05.607534 systemd-networkd[1435]: cali108a2b1e5dd: Gained 
IPv6LL Sep 9 21:31:05.612127 containerd[1526]: 2025-09-09 21:31:05.531 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0 coredns-674b8bbfcf- kube-system 7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c 828 0 2025-09-09 21:30:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6lb7j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia4927d83607 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-" Sep 9 21:31:05.612127 containerd[1526]: 2025-09-09 21:31:05.531 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.612127 containerd[1526]: 2025-09-09 21:31:05.555 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" HandleID="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Workload="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.555 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" HandleID="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Workload="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x40002d5630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6lb7j", "timestamp":"2025-09-09 21:31:05.55533511 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.555 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.555 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.555 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.565 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" host="localhost" Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.571 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.575 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.576 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.578 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:05.612402 containerd[1526]: 2025-09-09 21:31:05.578 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" 
host="localhost" Sep 9 21:31:05.612778 containerd[1526]: 2025-09-09 21:31:05.579 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8 Sep 9 21:31:05.612778 containerd[1526]: 2025-09-09 21:31:05.583 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" host="localhost" Sep 9 21:31:05.612778 containerd[1526]: 2025-09-09 21:31:05.589 [INFO][4599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" host="localhost" Sep 9 21:31:05.612778 containerd[1526]: 2025-09-09 21:31:05.589 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" host="localhost" Sep 9 21:31:05.612778 containerd[1526]: 2025-09-09 21:31:05.589 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 21:31:05.612778 containerd[1526]: 2025-09-09 21:31:05.589 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" HandleID="k8s-pod-network.9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Workload="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.612969 containerd[1526]: 2025-09-09 21:31:05.591 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6lb7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4927d83607", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:05.613085 containerd[1526]: 2025-09-09 21:31:05.591 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.613085 containerd[1526]: 2025-09-09 21:31:05.591 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4927d83607 ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.613085 containerd[1526]: 2025-09-09 21:31:05.600 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.613201 containerd[1526]: 2025-09-09 21:31:05.600 [INFO][4579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8", Pod:"coredns-674b8bbfcf-6lb7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4927d83607", MAC:"6a:22:be:ed:20:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:05.613201 containerd[1526]: 2025-09-09 21:31:05.610 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-6lb7j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6lb7j-eth0" Sep 9 21:31:05.646284 containerd[1526]: time="2025-09-09T21:31:05.646169452Z" level=info msg="connecting to shim 9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8" address="unix:///run/containerd/s/8a4dd1470b2ba3370af49570d6372047de7f694f082c52f17bc7a8759e7a0755" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:05.661717 kubelet[2677]: I0909 21:31:05.661650 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bff587889-2m72r" podStartSLOduration=19.261094678 podStartE2EDuration="21.661633128s" podCreationTimestamp="2025-09-09 21:30:44 +0000 UTC" firstStartedPulling="2025-09-09 21:31:02.770709868 +0000 UTC m=+39.395646952" lastFinishedPulling="2025-09-09 21:31:05.171248318 +0000 UTC m=+41.796185402" observedRunningTime="2025-09-09 21:31:05.65967745 +0000 UTC m=+42.284614494" watchObservedRunningTime="2025-09-09 21:31:05.661633128 +0000 UTC m=+42.286570212" Sep 9 21:31:05.688474 systemd[1]: Started cri-containerd-9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8.scope - libcontainer container 9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8. 
Sep 9 21:31:05.692226 containerd[1526]: time="2025-09-09T21:31:05.692173026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95\" id:\"11fc20642acbc1e2f455ef1e331c0a710358df25bdb3c75c4892d0daa24cc3be\" pid:4661 exited_at:{seconds:1757453465 nanos:691783450}" Sep 9 21:31:05.708294 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:05.724856 systemd-networkd[1435]: cali8274888bfe7: Link UP Sep 9 21:31:05.727433 systemd-networkd[1435]: cali8274888bfe7: Gained carrier Sep 9 21:31:05.743093 containerd[1526]: time="2025-09-09T21:31:05.742976041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6lb7j,Uid:7b9ca5bc-0aa5-4fb1-b460-96d3f44a6e0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8\"" Sep 9 21:31:05.745091 kubelet[2677]: E0909 21:31:05.744736 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.529 [INFO][4568] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--8754q-eth0 goldmane-54d579b49d- calico-system 6ecd122a-2c80-485e-865e-2677d804d4cd 829 0 2025-09-09 21:30:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-8754q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8274888bfe7 [] [] }} ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" 
Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.529 [INFO][4568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.559 [INFO][4597] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" HandleID="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Workload="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.559 [INFO][4597] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" HandleID="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Workload="localhost-k8s-goldmane--54d579b49d--8754q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000119640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-8754q", "timestamp":"2025-09-09 21:31:05.559708131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.559 [INFO][4597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.589 [INFO][4597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.589 [INFO][4597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.666 [INFO][4597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.674 [INFO][4597] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.686 [INFO][4597] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.689 [INFO][4597] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.695 [INFO][4597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.695 [INFO][4597] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.697 [INFO][4597] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614 Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.704 [INFO][4597] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.713 [INFO][4597] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.713 [INFO][4597] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" host="localhost" Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.713 [INFO][4597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:31:05.748485 containerd[1526]: 2025-09-09 21:31:05.713 [INFO][4597] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" HandleID="k8s-pod-network.54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Workload="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.749067 containerd[1526]: 2025-09-09 21:31:05.718 [INFO][4568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--8754q-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6ecd122a-2c80-485e-865e-2677d804d4cd", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-8754q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8274888bfe7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:05.749067 containerd[1526]: 2025-09-09 21:31:05.718 [INFO][4568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.749067 containerd[1526]: 2025-09-09 21:31:05.719 [INFO][4568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8274888bfe7 ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.749067 containerd[1526]: 2025-09-09 21:31:05.725 [INFO][4568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.749067 containerd[1526]: 2025-09-09 21:31:05.728 [INFO][4568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--8754q-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6ecd122a-2c80-485e-865e-2677d804d4cd", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614", Pod:"goldmane-54d579b49d-8754q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8274888bfe7", MAC:"a6:92:8d:c7:48:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:05.749067 containerd[1526]: 2025-09-09 21:31:05.743 [INFO][4568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" Namespace="calico-system" Pod="goldmane-54d579b49d-8754q" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--8754q-eth0" Sep 9 21:31:05.751631 containerd[1526]: time="2025-09-09T21:31:05.751582624Z" level=info msg="CreateContainer 
within sandbox \"9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:31:05.763838 containerd[1526]: time="2025-09-09T21:31:05.763726508Z" level=info msg="Container 79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:31:05.767949 containerd[1526]: time="2025-09-09T21:31:05.767904062Z" level=info msg="connecting to shim 54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614" address="unix:///run/containerd/s/71de474f76a37cb4cc1d615e619fce222161bc332436614406afa32f255f6f0a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:05.772030 containerd[1526]: time="2025-09-09T21:31:05.771857263Z" level=info msg="CreateContainer within sandbox \"9f25c8c1784a0910c2b121f3e12fd59291a42979531186d51d1eef7ce32111e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a\"" Sep 9 21:31:05.773221 containerd[1526]: time="2025-09-09T21:31:05.772439986Z" level=info msg="StartContainer for \"79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a\"" Sep 9 21:31:05.776987 containerd[1526]: time="2025-09-09T21:31:05.776957588Z" level=info msg="connecting to shim 79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a" address="unix:///run/containerd/s/8a4dd1470b2ba3370af49570d6372047de7f694f082c52f17bc7a8759e7a0755" protocol=ttrpc version=3 Sep 9 21:31:05.799439 systemd[1]: Started cri-containerd-54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614.scope - libcontainer container 54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614. Sep 9 21:31:05.802730 systemd[1]: Started cri-containerd-79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a.scope - libcontainer container 79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a. 
Sep 9 21:31:05.813109 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:05.836257 containerd[1526]: time="2025-09-09T21:31:05.836211924Z" level=info msg="StartContainer for \"79958aa155917b24571c32ac1b27d283490f21a5628c861519198a05eafefe6a\" returns successfully" Sep 9 21:31:05.843355 containerd[1526]: time="2025-09-09T21:31:05.843319333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-8754q,Uid:6ecd122a-2c80-485e-865e-2677d804d4cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614\"" Sep 9 21:31:06.487854 containerd[1526]: time="2025-09-09T21:31:06.487810683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j2xp,Uid:0c443cb4-c154-434b-8539-5fef8fd1056c,Namespace:calico-system,Attempt:0,}" Sep 9 21:31:06.488252 kubelet[2677]: E0909 21:31:06.488186 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:06.488881 containerd[1526]: time="2025-09-09T21:31:06.488853027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-ncwmj,Uid:84004ccb-e3a6-4853-9973-88e70645f76b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 21:31:06.489089 containerd[1526]: time="2025-09-09T21:31:06.489061576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxf6p,Uid:191f0b65-63b1-4ca5-96ba-7eb89927ac9c,Namespace:kube-system,Attempt:0,}" Sep 9 21:31:06.622135 systemd-networkd[1435]: caliaa6ebc28b7c: Link UP Sep 9 21:31:06.623711 systemd-networkd[1435]: caliaa6ebc28b7c: Gained carrier Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.545 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--6j2xp-eth0 csi-node-driver- calico-system 0c443cb4-c154-434b-8539-5fef8fd1056c 728 0 2025-09-09 21:30:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6j2xp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa6ebc28b7c [] [] }} ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.545 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.577 [INFO][4832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" HandleID="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Workload="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.577 [INFO][4832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" HandleID="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Workload="localhost-k8s-csi--node--driver--6j2xp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-6j2xp", "timestamp":"2025-09-09 21:31:06.577296536 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.577 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.577 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.577 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.589 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.593 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.597 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.600 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.602 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.602 [INFO][4832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.604 [INFO][4832] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.607 [INFO][4832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.613 [INFO][4832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.613 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" host="localhost" Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.614 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 21:31:06.640221 containerd[1526]: 2025-09-09 21:31:06.614 [INFO][4832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" HandleID="k8s-pod-network.0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Workload="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.640863 containerd[1526]: 2025-09-09 21:31:06.618 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6j2xp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c443cb4-c154-434b-8539-5fef8fd1056c", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6j2xp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa6ebc28b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:06.640863 containerd[1526]: 2025-09-09 21:31:06.618 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.640863 containerd[1526]: 2025-09-09 21:31:06.618 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa6ebc28b7c ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.640863 containerd[1526]: 2025-09-09 21:31:06.623 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.640863 containerd[1526]: 2025-09-09 21:31:06.625 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6j2xp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c443cb4-c154-434b-8539-5fef8fd1056c", ResourceVersion:"728", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b", Pod:"csi-node-driver-6j2xp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa6ebc28b7c", MAC:"0e:7c:dc:59:8a:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:06.640863 containerd[1526]: 2025-09-09 21:31:06.637 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" Namespace="calico-system" Pod="csi-node-driver-6j2xp" WorkloadEndpoint="localhost-k8s-csi--node--driver--6j2xp-eth0" Sep 9 21:31:06.652835 kubelet[2677]: E0909 21:31:06.652801 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:06.668099 kubelet[2677]: I0909 21:31:06.667944 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6lb7j" podStartSLOduration=36.667908505 
podStartE2EDuration="36.667908505s" podCreationTimestamp="2025-09-09 21:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:31:06.667610584 +0000 UTC m=+43.292547668" watchObservedRunningTime="2025-09-09 21:31:06.667908505 +0000 UTC m=+43.292845589" Sep 9 21:31:06.670665 containerd[1526]: time="2025-09-09T21:31:06.670621560Z" level=info msg="connecting to shim 0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b" address="unix:///run/containerd/s/aea3a30c3fb1f0de8a7e7022c416722f9fe7ee648b31e4069f39d1084b6fda0a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:06.709429 systemd[1]: Started cri-containerd-0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b.scope - libcontainer container 0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b. Sep 9 21:31:06.729538 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:06.738053 systemd-networkd[1435]: cali65ec6992b81: Link UP Sep 9 21:31:06.738630 systemd-networkd[1435]: cali65ec6992b81: Gained carrier Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.549 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0 calico-apiserver-7cf8986b4f- calico-apiserver 84004ccb-e3a6-4853-9973-88e70645f76b 826 0 2025-09-09 21:30:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf8986b4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cf8986b4f-ncwmj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65ec6992b81 [] [] }} 
ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.549 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.579 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" HandleID="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Workload="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.579 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" HandleID="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Workload="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cf8986b4f-ncwmj", "timestamp":"2025-09-09 21:31:06.579045618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.579 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.614 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.614 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.691 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.701 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.715 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.717 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.719 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.719 [INFO][4838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.721 [INFO][4838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89 Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.726 [INFO][4838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.732 [INFO][4838] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.732 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" host="localhost" Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.732 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 21:31:06.758196 containerd[1526]: 2025-09-09 21:31:06.732 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" HandleID="k8s-pod-network.353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Workload="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.758743 containerd[1526]: 2025-09-09 21:31:06.735 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0", GenerateName:"calico-apiserver-7cf8986b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"84004ccb-e3a6-4853-9973-88e70645f76b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf8986b4f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cf8986b4f-ncwmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65ec6992b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:06.758743 containerd[1526]: 2025-09-09 21:31:06.735 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.758743 containerd[1526]: 2025-09-09 21:31:06.735 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65ec6992b81 ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.758743 containerd[1526]: 2025-09-09 21:31:06.738 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.758743 containerd[1526]: 2025-09-09 
21:31:06.738 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0", GenerateName:"calico-apiserver-7cf8986b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"84004ccb-e3a6-4853-9973-88e70645f76b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf8986b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89", Pod:"calico-apiserver-7cf8986b4f-ncwmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65ec6992b81", MAC:"2e:cd:e4:b3:5b:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:06.758743 containerd[1526]: 2025-09-09 21:31:06.754 [INFO][4786] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" Namespace="calico-apiserver" Pod="calico-apiserver-7cf8986b4f-ncwmj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cf8986b4f--ncwmj-eth0" Sep 9 21:31:06.759525 systemd-networkd[1435]: calia4927d83607: Gained IPv6LL Sep 9 21:31:06.768379 containerd[1526]: time="2025-09-09T21:31:06.768324150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j2xp,Uid:0c443cb4-c154-434b-8539-5fef8fd1056c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b\"" Sep 9 21:31:06.804732 containerd[1526]: time="2025-09-09T21:31:06.804479589Z" level=info msg="connecting to shim 353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89" address="unix:///run/containerd/s/3931b48a8cc51e1e19bd8d4c3c01fc60a7488b497f1c3a001ca44953c4ab3d1d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:06.835440 systemd[1]: Started cri-containerd-353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89.scope - libcontainer container 353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89. 
Sep 9 21:31:06.844410 systemd-networkd[1435]: cali0df2008cd51: Link UP Sep 9 21:31:06.845545 systemd-networkd[1435]: cali0df2008cd51: Gained carrier Sep 9 21:31:06.856073 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.555 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0 coredns-674b8bbfcf- kube-system 191f0b65-63b1-4ca5-96ba-7eb89927ac9c 824 0 2025-09-09 21:30:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pxf6p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0df2008cd51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.555 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.593 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" HandleID="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Workload="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.593 [INFO][4844] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" HandleID="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Workload="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac650), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pxf6p", "timestamp":"2025-09-09 21:31:06.593094561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.593 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.732 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.732 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.792 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.803 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.809 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.811 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.817 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.817 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.819 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.825 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.833 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.834 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" host="localhost" Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.834 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 21:31:06.875238 containerd[1526]: 2025-09-09 21:31:06.834 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" HandleID="k8s-pod-network.a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Workload="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.875847 containerd[1526]: 2025-09-09 21:31:06.837 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"191f0b65-63b1-4ca5-96ba-7eb89927ac9c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pxf6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0df2008cd51", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:06.875847 containerd[1526]: 2025-09-09 21:31:06.837 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.875847 containerd[1526]: 2025-09-09 21:31:06.837 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0df2008cd51 ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.875847 containerd[1526]: 2025-09-09 21:31:06.844 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.875847 containerd[1526]: 2025-09-09 21:31:06.848 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"191f0b65-63b1-4ca5-96ba-7eb89927ac9c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 21, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c", Pod:"coredns-674b8bbfcf-pxf6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0df2008cd51", MAC:"3e:f2:02:dc:31:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 21:31:06.875847 containerd[1526]: 2025-09-09 21:31:06.870 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxf6p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxf6p-eth0" Sep 9 21:31:06.896598 containerd[1526]: time="2025-09-09T21:31:06.896560082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf8986b4f-ncwmj,Uid:84004ccb-e3a6-4853-9973-88e70645f76b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89\"" Sep 9 21:31:06.911179 containerd[1526]: time="2025-09-09T21:31:06.911137537Z" level=info msg="connecting to shim a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c" address="unix:///run/containerd/s/283c9542a6a6eeeea90059f344417a52f76550a5a28537f9e64768b2ab00ef3c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:31:06.941519 systemd[1]: Started cri-containerd-a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c.scope - libcontainer container a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c. 
Sep 9 21:31:06.956110 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:31:06.976704 containerd[1526]: time="2025-09-09T21:31:06.976671879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxf6p,Uid:191f0b65-63b1-4ca5-96ba-7eb89927ac9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c\"" Sep 9 21:31:06.977876 kubelet[2677]: E0909 21:31:06.977778 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:06.984295 containerd[1526]: time="2025-09-09T21:31:06.983836790Z" level=info msg="CreateContainer within sandbox \"a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:31:06.991174 containerd[1526]: time="2025-09-09T21:31:06.991092553Z" level=info msg="Container 6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:31:06.998041 containerd[1526]: time="2025-09-09T21:31:06.997989266Z" level=info msg="CreateContainer within sandbox \"a1bd03672d71654349c6937668336ff5fb8a954bf57efc787a4a3909a37b898c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c\"" Sep 9 21:31:06.998590 containerd[1526]: time="2025-09-09T21:31:06.998566826Z" level=info msg="StartContainer for \"6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c\"" Sep 9 21:31:06.999404 containerd[1526]: time="2025-09-09T21:31:06.999373858Z" level=info msg="connecting to shim 6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c" address="unix:///run/containerd/s/283c9542a6a6eeeea90059f344417a52f76550a5a28537f9e64768b2ab00ef3c" protocol=ttrpc version=3 Sep 9 
21:31:07.033440 systemd[1]: Started cri-containerd-6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c.scope - libcontainer container 6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c. Sep 9 21:31:07.152094 containerd[1526]: time="2025-09-09T21:31:07.152053474Z" level=info msg="StartContainer for \"6684b9d4d51aeb56d4e5bdf3c57501cf7ca42a9bb5bd6077bca9317498932e9c\" returns successfully" Sep 9 21:31:07.594784 containerd[1526]: time="2025-09-09T21:31:07.594745727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:07.595289 containerd[1526]: time="2025-09-09T21:31:07.595228152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 21:31:07.596102 containerd[1526]: time="2025-09-09T21:31:07.596056263Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:07.599037 containerd[1526]: time="2025-09-09T21:31:07.598856201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:31:07.599752 containerd[1526]: time="2025-09-09T21:31:07.599720557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.428321098s" Sep 9 21:31:07.599829 containerd[1526]: time="2025-09-09T21:31:07.599757882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image 
reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 21:31:07.600963 containerd[1526]: time="2025-09-09T21:31:07.600928840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 21:31:07.604654 containerd[1526]: time="2025-09-09T21:31:07.604625898Z" level=info msg="CreateContainer within sandbox \"decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 21:31:07.611299 containerd[1526]: time="2025-09-09T21:31:07.611247990Z" level=info msg="Container bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:31:07.618467 containerd[1526]: time="2025-09-09T21:31:07.618425478Z" level=info msg="CreateContainer within sandbox \"decdf582617eb4216e228cf496c145e1554f6bdeb8d57f01677245da3410db56\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5\"" Sep 9 21:31:07.618943 containerd[1526]: time="2025-09-09T21:31:07.618920384Z" level=info msg="StartContainer for \"bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5\"" Sep 9 21:31:07.619981 containerd[1526]: time="2025-09-09T21:31:07.619956204Z" level=info msg="connecting to shim bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5" address="unix:///run/containerd/s/0bd2ccbeedd0c71e5feb390083aca61b73211f0d7fc5cf6b14f1d67b14940575" protocol=ttrpc version=3 Sep 9 21:31:07.641419 systemd[1]: Started cri-containerd-bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5.scope - libcontainer container bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5. 
Sep 9 21:31:07.655604 systemd-networkd[1435]: cali8274888bfe7: Gained IPv6LL Sep 9 21:31:07.659082 kubelet[2677]: E0909 21:31:07.659052 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:07.665631 kubelet[2677]: E0909 21:31:07.665608 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:07.671756 kubelet[2677]: I0909 21:31:07.671463 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pxf6p" podStartSLOduration=37.67143458 podStartE2EDuration="37.67143458s" podCreationTimestamp="2025-09-09 21:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:31:07.670441127 +0000 UTC m=+44.295378171" watchObservedRunningTime="2025-09-09 21:31:07.67143458 +0000 UTC m=+44.296371705" Sep 9 21:31:07.695770 containerd[1526]: time="2025-09-09T21:31:07.695733015Z" level=info msg="StartContainer for \"bc63c73bd440d4e72cfbd71e9d428f5af969a49baf7d409ceb3682d284efb9e5\" returns successfully" Sep 9 21:31:08.118707 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:49050.service - OpenSSH per-connection server daemon (10.0.0.1:49050). Sep 9 21:31:08.203372 sshd[5113]: Accepted publickey for core from 10.0.0.1 port 49050 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:31:08.205367 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:31:08.210082 systemd-logind[1504]: New session 9 of user core. Sep 9 21:31:08.217515 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 9 21:31:08.296963 systemd-networkd[1435]: cali0df2008cd51: Gained IPv6LL Sep 9 21:31:08.487385 systemd-networkd[1435]: caliaa6ebc28b7c: Gained IPv6LL Sep 9 21:31:08.521366 sshd[5116]: Connection closed by 10.0.0.1 port 49050 Sep 9 21:31:08.521721 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Sep 9 21:31:08.525191 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:49050.service: Deactivated successfully. Sep 9 21:31:08.526918 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 21:31:08.527622 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit. Sep 9 21:31:08.528914 systemd-logind[1504]: Removed session 9. Sep 9 21:31:08.615495 systemd-networkd[1435]: cali65ec6992b81: Gained IPv6LL Sep 9 21:31:08.669217 kubelet[2677]: E0909 21:31:08.669166 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:08.669721 kubelet[2677]: E0909 21:31:08.669669 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:31:08.679818 kubelet[2677]: I0909 21:31:08.679675 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf8986b4f-5v74p" podStartSLOduration=24.782821731 podStartE2EDuration="28.679663634s" podCreationTimestamp="2025-09-09 21:30:40 +0000 UTC" firstStartedPulling="2025-09-09 21:31:03.703732489 +0000 UTC m=+40.328669573" lastFinishedPulling="2025-09-09 21:31:07.600574392 +0000 UTC m=+44.225511476" observedRunningTime="2025-09-09 21:31:08.678831924 +0000 UTC m=+45.303769008" watchObservedRunningTime="2025-09-09 21:31:08.679663634 +0000 UTC m=+45.304600678" Sep 9 21:31:09.599028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659759637.mount: Deactivated successfully. 
Sep 9 21:31:09.671009 kubelet[2677]: I0909 21:31:09.670876 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 21:31:09.671399 kubelet[2677]: E0909 21:31:09.671296 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:31:10.047317 containerd[1526]: time="2025-09-09T21:31:10.047243263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:10.047936 containerd[1526]: time="2025-09-09T21:31:10.047884383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 9 21:31:10.048872 containerd[1526]: time="2025-09-09T21:31:10.048843863Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:10.051655 containerd[1526]: time="2025-09-09T21:31:10.051531921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.450565755s"
Sep 9 21:31:10.051655 containerd[1526]: time="2025-09-09T21:31:10.051569685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 9 21:31:10.052480 containerd[1526]: time="2025-09-09T21:31:10.052432474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 9 21:31:10.059007 containerd[1526]: time="2025-09-09T21:31:10.058963893Z" level=info msg="CreateContainer within sandbox \"54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 21:31:10.060326 containerd[1526]: time="2025-09-09T21:31:10.060283058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:10.065794 containerd[1526]: time="2025-09-09T21:31:10.065755425Z" level=info msg="Container 1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:31:10.075390 containerd[1526]: time="2025-09-09T21:31:10.075353429Z" level=info msg="CreateContainer within sandbox \"54a805a729ade39ccecf7a630103d20d038e8820be24a878f3f0463f8c6c3614\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000\""
Sep 9 21:31:10.075804 containerd[1526]: time="2025-09-09T21:31:10.075782683Z" level=info msg="StartContainer for \"1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000\""
Sep 9 21:31:10.077147 containerd[1526]: time="2025-09-09T21:31:10.077118210Z" level=info msg="connecting to shim 1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000" address="unix:///run/containerd/s/71de474f76a37cb4cc1d615e619fce222161bc332436614406afa32f255f6f0a" protocol=ttrpc version=3
Sep 9 21:31:10.107484 systemd[1]: Started cri-containerd-1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000.scope - libcontainer container 1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000.
Sep 9 21:31:10.150180 containerd[1526]: time="2025-09-09T21:31:10.150110327Z" level=info msg="StartContainer for \"1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000\" returns successfully"
Sep 9 21:31:10.689452 kubelet[2677]: I0909 21:31:10.687349 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-8754q" podStartSLOduration=23.480115725 podStartE2EDuration="27.687331238s" podCreationTimestamp="2025-09-09 21:30:43 +0000 UTC" firstStartedPulling="2025-09-09 21:31:05.845062461 +0000 UTC m=+42.469999545" lastFinishedPulling="2025-09-09 21:31:10.052277974 +0000 UTC m=+46.677215058" observedRunningTime="2025-09-09 21:31:10.685746519 +0000 UTC m=+47.310683603" watchObservedRunningTime="2025-09-09 21:31:10.687331238 +0000 UTC m=+47.312268362"
Sep 9 21:31:10.775283 containerd[1526]: time="2025-09-09T21:31:10.775047282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000\" id:\"6c4a87b23604cfbcadcfb021c92c54dbda6bde5c3189067b38045d9539598767\" pid:5192 exit_status:1 exited_at:{seconds:1757453470 nanos:774226819}"
Sep 9 21:31:11.740048 containerd[1526]: time="2025-09-09T21:31:11.740006798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bf23cd57670e85abe9a7e64c5862efa34762f95aba01716c4a7e6f0903fd000\" id:\"bfed1416ffcdc9b8ef4fa33d5a179000da955c37a292b26908eec73857e28bf4\" pid:5217 exit_status:1 exited_at:{seconds:1757453471 nanos:739761528}"
Sep 9 21:31:11.931901 containerd[1526]: time="2025-09-09T21:31:11.931860063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:11.932437 containerd[1526]: time="2025-09-09T21:31:11.932414771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 9 21:31:11.933299 containerd[1526]: time="2025-09-09T21:31:11.933276276Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:11.936166 containerd[1526]: time="2025-09-09T21:31:11.936131027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:11.937084 containerd[1526]: time="2025-09-09T21:31:11.936967569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.884487569s"
Sep 9 21:31:11.937084 containerd[1526]: time="2025-09-09T21:31:11.936998373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 9 21:31:11.938108 containerd[1526]: time="2025-09-09T21:31:11.938002496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 21:31:11.941872 containerd[1526]: time="2025-09-09T21:31:11.941837207Z" level=info msg="CreateContainer within sandbox \"0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 9 21:31:11.954157 containerd[1526]: time="2025-09-09T21:31:11.954119594Z" level=info msg="Container 612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:31:11.968179 containerd[1526]: time="2025-09-09T21:31:11.968125433Z" level=info msg="CreateContainer within sandbox \"0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082\""
Sep 9 21:31:11.968874 containerd[1526]: time="2025-09-09T21:31:11.968847642Z" level=info msg="StartContainer for \"612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082\""
Sep 9 21:31:11.971463 containerd[1526]: time="2025-09-09T21:31:11.971433039Z" level=info msg="connecting to shim 612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082" address="unix:///run/containerd/s/aea3a30c3fb1f0de8a7e7022c416722f9fe7ee648b31e4069f39d1084b6fda0a" protocol=ttrpc version=3
Sep 9 21:31:11.993461 systemd[1]: Started cri-containerd-612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082.scope - libcontainer container 612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082.
Sep 9 21:31:12.096374 containerd[1526]: time="2025-09-09T21:31:12.096335925Z" level=info msg="StartContainer for \"612370adbb5e983c3cc303df561fa16d251af988c151c6f4ca6e010c8fb92082\" returns successfully"
Sep 9 21:31:12.313331 containerd[1526]: time="2025-09-09T21:31:12.313031845Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:12.313937 containerd[1526]: time="2025-09-09T21:31:12.313883067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 21:31:12.315977 containerd[1526]: time="2025-09-09T21:31:12.315940355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 377.875771ms"
Sep 9 21:31:12.316028 containerd[1526]: time="2025-09-09T21:31:12.315974119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 9 21:31:12.316856 containerd[1526]: time="2025-09-09T21:31:12.316791657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 21:31:12.320313 containerd[1526]: time="2025-09-09T21:31:12.320256553Z" level=info msg="CreateContainer within sandbox \"353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 21:31:12.328293 containerd[1526]: time="2025-09-09T21:31:12.327436376Z" level=info msg="Container 8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:31:12.335482 containerd[1526]: time="2025-09-09T21:31:12.335433257Z" level=info msg="CreateContainer within sandbox \"353e0700ebe5c03867367a5bd604f7344f523a9358935d08b4114a077262ac89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81\""
Sep 9 21:31:12.336842 containerd[1526]: time="2025-09-09T21:31:12.336809223Z" level=info msg="StartContainer for \"8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81\""
Sep 9 21:31:12.338652 containerd[1526]: time="2025-09-09T21:31:12.338619280Z" level=info msg="connecting to shim 8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81" address="unix:///run/containerd/s/3931b48a8cc51e1e19bd8d4c3c01fc60a7488b497f1c3a001ca44953c4ab3d1d" protocol=ttrpc version=3
Sep 9 21:31:12.369472 systemd[1]: Started cri-containerd-8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81.scope - libcontainer container 8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81.
Sep 9 21:31:12.407815 containerd[1526]: time="2025-09-09T21:31:12.407705982Z" level=info msg="StartContainer for \"8acc0acf9e6a776305315d10609f0abd12e071d5dce882e354500a620af28f81\" returns successfully"
Sep 9 21:31:12.693470 kubelet[2677]: I0909 21:31:12.693360 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf8986b4f-ncwmj" podStartSLOduration=27.274892136 podStartE2EDuration="32.693344307s" podCreationTimestamp="2025-09-09 21:30:40 +0000 UTC" firstStartedPulling="2025-09-09 21:31:06.898279999 +0000 UTC m=+43.523217083" lastFinishedPulling="2025-09-09 21:31:12.31673217 +0000 UTC m=+48.941669254" observedRunningTime="2025-09-09 21:31:12.691489005 +0000 UTC m=+49.316426089" watchObservedRunningTime="2025-09-09 21:31:12.693344307 +0000 UTC m=+49.318281391"
Sep 9 21:31:13.540504 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:44282.service - OpenSSH per-connection server daemon (10.0.0.1:44282).
Sep 9 21:31:13.614807 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 44282 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:13.614486 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:13.622760 systemd-logind[1504]: New session 10 of user core.
Sep 9 21:31:13.628483 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 21:31:13.815060 containerd[1526]: time="2025-09-09T21:31:13.814947264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:13.816202 containerd[1526]: time="2025-09-09T21:31:13.816116442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 9 21:31:13.817568 containerd[1526]: time="2025-09-09T21:31:13.817533889Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:13.820288 containerd[1526]: time="2025-09-09T21:31:13.820228126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:31:13.831966 containerd[1526]: time="2025-09-09T21:31:13.831612947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.514789766s"
Sep 9 21:31:13.831966 containerd[1526]: time="2025-09-09T21:31:13.831871977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 9 21:31:13.838651 containerd[1526]: time="2025-09-09T21:31:13.838623733Z" level=info msg="CreateContainer within sandbox \"0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 21:31:13.875797 sshd[5315]: Connection closed by 10.0.0.1 port 44282
Sep 9 21:31:13.876307 sshd-session[5312]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:13.880290 containerd[1526]: time="2025-09-09T21:31:13.880225752Z" level=info msg="Container 2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:31:13.891348 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:44282.service: Deactivated successfully.
Sep 9 21:31:13.894734 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 21:31:13.896186 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit.
Sep 9 21:31:13.898171 systemd-logind[1504]: Removed session 10.
Sep 9 21:31:13.900006 containerd[1526]: time="2025-09-09T21:31:13.899882067Z" level=info msg="CreateContainer within sandbox \"0aa78aedcf85a262ded4965f18fb11446c12dbda7097ad8b8de03aafad2e780b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d\""
Sep 9 21:31:13.900167 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:44298.service - OpenSSH per-connection server daemon (10.0.0.1:44298).
Sep 9 21:31:13.904554 containerd[1526]: time="2025-09-09T21:31:13.904474968Z" level=info msg="StartContainer for \"2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d\""
Sep 9 21:31:13.906811 containerd[1526]: time="2025-09-09T21:31:13.906780080Z" level=info msg="connecting to shim 2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d" address="unix:///run/containerd/s/aea3a30c3fb1f0de8a7e7022c416722f9fe7ee648b31e4069f39d1084b6fda0a" protocol=ttrpc version=3
Sep 9 21:31:13.930443 systemd[1]: Started cri-containerd-2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d.scope - libcontainer container 2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d.
Sep 9 21:31:13.968936 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 44298 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:13.971138 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:13.976408 systemd-logind[1504]: New session 11 of user core.
Sep 9 21:31:13.980452 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 21:31:13.981165 containerd[1526]: time="2025-09-09T21:31:13.981115955Z" level=info msg="StartContainer for \"2903079a7c0479ace8a5296ff4400db63b5b881eccdb545754ffa6d001a09c6d\" returns successfully"
Sep 9 21:31:14.195773 sshd[5370]: Connection closed by 10.0.0.1 port 44298
Sep 9 21:31:14.194659 sshd-session[5334]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:14.207902 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:44298.service: Deactivated successfully.
Sep 9 21:31:14.212631 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 21:31:14.215312 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit.
Sep 9 21:31:14.219710 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:44314.service - OpenSSH per-connection server daemon (10.0.0.1:44314).
Sep 9 21:31:14.220239 systemd-logind[1504]: Removed session 11.
Sep 9 21:31:14.278049 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 44314 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:14.279813 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:14.285680 systemd-logind[1504]: New session 12 of user core.
Sep 9 21:31:14.290430 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 21:31:14.483535 sshd[5388]: Connection closed by 10.0.0.1 port 44314
Sep 9 21:31:14.482972 sshd-session[5385]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:14.487187 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:44314.service: Deactivated successfully.
Sep 9 21:31:14.489047 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 21:31:14.490786 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit.
Sep 9 21:31:14.492117 systemd-logind[1504]: Removed session 12.
Sep 9 21:31:14.558968 kubelet[2677]: I0909 21:31:14.558927 2677 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 21:31:14.566724 kubelet[2677]: I0909 21:31:14.566697 2677 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 21:31:14.709485 kubelet[2677]: I0909 21:31:14.709229 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6j2xp" podStartSLOduration=23.664833846 podStartE2EDuration="30.709212041s" podCreationTimestamp="2025-09-09 21:30:44 +0000 UTC" firstStartedPulling="2025-09-09 21:31:06.788544386 +0000 UTC m=+43.413481470" lastFinishedPulling="2025-09-09 21:31:13.832922581 +0000 UTC m=+50.457859665" observedRunningTime="2025-09-09 21:31:14.707793237 +0000 UTC m=+51.332730321" watchObservedRunningTime="2025-09-09 21:31:14.709212041 +0000 UTC m=+51.334149085"
Sep 9 21:31:19.504354 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:44330.service - OpenSSH per-connection server daemon (10.0.0.1:44330).
Sep 9 21:31:19.565440 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 44330 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:19.566658 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:19.570259 systemd-logind[1504]: New session 13 of user core.
Sep 9 21:31:19.580396 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 21:31:19.767938 sshd[5417]: Connection closed by 10.0.0.1 port 44330
Sep 9 21:31:19.768537 sshd-session[5414]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:19.787005 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:44330.service: Deactivated successfully.
Sep 9 21:31:19.789044 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 21:31:19.789891 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit.
Sep 9 21:31:19.793689 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:44342.service - OpenSSH per-connection server daemon (10.0.0.1:44342).
Sep 9 21:31:19.794200 systemd-logind[1504]: Removed session 13.
Sep 9 21:31:19.851578 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 44342 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:19.852749 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:19.857336 systemd-logind[1504]: New session 14 of user core.
Sep 9 21:31:19.870401 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 21:31:20.104991 sshd[5435]: Connection closed by 10.0.0.1 port 44342
Sep 9 21:31:20.105373 sshd-session[5432]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:20.116567 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:44342.service: Deactivated successfully.
Sep 9 21:31:20.118194 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 21:31:20.118988 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit.
Sep 9 21:31:20.121926 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:47986.service - OpenSSH per-connection server daemon (10.0.0.1:47986).
Sep 9 21:31:20.122546 systemd-logind[1504]: Removed session 14.
Sep 9 21:31:20.171709 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 47986 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:20.172767 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:20.176341 systemd-logind[1504]: New session 15 of user core.
Sep 9 21:31:20.186466 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 21:31:20.800044 sshd[5449]: Connection closed by 10.0.0.1 port 47986
Sep 9 21:31:20.801873 sshd-session[5446]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:20.809543 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:47986.service: Deactivated successfully.
Sep 9 21:31:20.812100 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 21:31:20.817667 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit.
Sep 9 21:31:20.822656 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:48002.service - OpenSSH per-connection server daemon (10.0.0.1:48002).
Sep 9 21:31:20.823717 systemd-logind[1504]: Removed session 15.
Sep 9 21:31:20.869613 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 48002 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:20.870799 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:20.874713 systemd-logind[1504]: New session 16 of user core.
Sep 9 21:31:20.881439 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 21:31:21.152942 sshd[5472]: Connection closed by 10.0.0.1 port 48002
Sep 9 21:31:21.153361 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:21.166632 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:48002.service: Deactivated successfully.
Sep 9 21:31:21.170082 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 21:31:21.171218 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit.
Sep 9 21:31:21.174791 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:48004.service - OpenSSH per-connection server daemon (10.0.0.1:48004).
Sep 9 21:31:21.175354 systemd-logind[1504]: Removed session 16.
Sep 9 21:31:21.225438 sshd[5484]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:21.227210 sshd-session[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:21.231643 systemd-logind[1504]: New session 17 of user core.
Sep 9 21:31:21.240556 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 21:31:21.370127 sshd[5487]: Connection closed by 10.0.0.1 port 48004
Sep 9 21:31:21.370039 sshd-session[5484]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:21.373691 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:48004.service: Deactivated successfully.
Sep 9 21:31:21.375352 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 21:31:21.375961 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit.
Sep 9 21:31:21.377250 systemd-logind[1504]: Removed session 17.
Sep 9 21:31:25.136927 containerd[1526]: time="2025-09-09T21:31:25.136837172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95\" id:\"c4a3142812383f6ef1603569f678e706cd576b5be6d928b52f0d25f11c8f179b\" pid:5520 exited_at:{seconds:1757453485 nanos:136591148}"
Sep 9 21:31:26.382489 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014).
Sep 9 21:31:26.428183 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:26.429378 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:26.435775 systemd-logind[1504]: New session 18 of user core.
Sep 9 21:31:26.445441 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 21:31:26.564264 sshd[5534]: Connection closed by 10.0.0.1 port 48014
Sep 9 21:31:26.563708 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:26.567236 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:48014.service: Deactivated successfully.
Sep 9 21:31:26.570747 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 21:31:26.572432 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit.
Sep 9 21:31:26.573829 systemd-logind[1504]: Removed session 18.
Sep 9 21:31:28.700158 containerd[1526]: time="2025-09-09T21:31:28.700117998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa97cccf50a05929a19c1d8491de7f85c13bac3924ed6354a5d18163580047e3\" id:\"6efd55d517e64bf1331d42662f5e50a4704dddc257f0c22c0ce4ffcaf454e364\" pid:5561 exited_at:{seconds:1757453488 nanos:699723680}"
Sep 9 21:31:31.575573 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:42198.service - OpenSSH per-connection server daemon (10.0.0.1:42198).
Sep 9 21:31:31.630394 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 42198 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:31.631810 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:31.635528 systemd-logind[1504]: New session 19 of user core.
Sep 9 21:31:31.645463 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 21:31:31.816898 sshd[5580]: Connection closed by 10.0.0.1 port 42198
Sep 9 21:31:31.817239 sshd-session[5577]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:31.821435 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:42198.service: Deactivated successfully.
Sep 9 21:31:31.823120 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 21:31:31.823925 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit.
Sep 9 21:31:31.825034 systemd-logind[1504]: Removed session 19.
Sep 9 21:31:35.487296 kubelet[2677]: E0909 21:31:35.487232 2677 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:31:35.688401 containerd[1526]: time="2025-09-09T21:31:35.688357259Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85f2cd441e5f14d1fe361e1928974d80f2049d664a9a1df0ad14e856181eca95\" id:\"0063be1a9c00117c86b903c155801d2605b5f6149588ecba34ba05597315f61b\" pid:5605 exited_at:{seconds:1757453495 nanos:688098161}"
Sep 9 21:31:36.829478 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:42202.service - OpenSSH per-connection server daemon (10.0.0.1:42202).
Sep 9 21:31:36.870816 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 42202 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:31:36.872139 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:31:36.876644 systemd-logind[1504]: New session 20 of user core.
Sep 9 21:31:36.890488 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 21:31:37.006366 sshd[5620]: Connection closed by 10.0.0.1 port 42202
Sep 9 21:31:37.006938 sshd-session[5617]: pam_unix(sshd:session): session closed for user core
Sep 9 21:31:37.011415 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:42202.service: Deactivated successfully.
Sep 9 21:31:37.013232 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 21:31:37.013893 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit.
Sep 9 21:31:37.015348 systemd-logind[1504]: Removed session 20.
Sep 9 21:31:38.338001 kubelet[2677]: I0909 21:31:38.337779 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"