Sep 11 23:58:58.764709 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 11 23:58:58.764729 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Sep 11 22:19:25 -00 2025
Sep 11 23:58:58.764739 kernel: KASLR enabled
Sep 11 23:58:58.764744 kernel: efi: EFI v2.7 by EDK II
Sep 11 23:58:58.764750 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 11 23:58:58.764755 kernel: random: crng init done
Sep 11 23:58:58.764762 kernel: secureboot: Secure boot disabled
Sep 11 23:58:58.764767 kernel: ACPI: Early table checksum verification disabled
Sep 11 23:58:58.764784 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 11 23:58:58.764791 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 23:58:58.764797 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764803 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764808 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764814 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764821 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764828 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764834 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764840 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764846 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:58:58.764856 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 11 23:58:58.764862 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 11 23:58:58.764868 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:58:58.764874 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 11 23:58:58.764879 kernel: Zone ranges:
Sep 11 23:58:58.764885 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:58:58.764892 kernel: DMA32 empty
Sep 11 23:58:58.764898 kernel: Normal empty
Sep 11 23:58:58.764904 kernel: Device empty
Sep 11 23:58:58.764910 kernel: Movable zone start for each node
Sep 11 23:58:58.764915 kernel: Early memory node ranges
Sep 11 23:58:58.764921 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 11 23:58:58.764927 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 11 23:58:58.764933 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 11 23:58:58.764939 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 11 23:58:58.764945 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 11 23:58:58.764950 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 11 23:58:58.764956 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 11 23:58:58.764963 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 11 23:58:58.764969 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 11 23:58:58.764974 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 11 23:58:58.764983 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 11 23:58:58.764989 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 11 23:58:58.764995 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 11 23:58:58.765003 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:58:58.765009 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 11 23:58:58.765015 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 11 23:58:58.765021 kernel: psci: probing for conduit method from ACPI.
Sep 11 23:58:58.765028 kernel: psci: PSCIv1.1 detected in firmware.
Sep 11 23:58:58.765034 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 11 23:58:58.765040 kernel: psci: Trusted OS migration not required
Sep 11 23:58:58.765046 kernel: psci: SMC Calling Convention v1.1
Sep 11 23:58:58.765052 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 11 23:58:58.765059 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 11 23:58:58.765066 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 11 23:58:58.765073 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 11 23:58:58.765079 kernel: Detected PIPT I-cache on CPU0
Sep 11 23:58:58.765085 kernel: CPU features: detected: GIC system register CPU interface
Sep 11 23:58:58.765091 kernel: CPU features: detected: Spectre-v4
Sep 11 23:58:58.765098 kernel: CPU features: detected: Spectre-BHB
Sep 11 23:58:58.765104 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 11 23:58:58.765110 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 11 23:58:58.765117 kernel: CPU features: detected: ARM erratum 1418040
Sep 11 23:58:58.765123 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 11 23:58:58.765140 kernel: alternatives: applying boot alternatives
Sep 11 23:58:58.765148 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=482086cf30ef24f68ac7a1ade8cef289f4704fd240e7f8a80dce8eef21953880
Sep 11 23:58:58.765157 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 23:58:58.765163 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 23:58:58.765169 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 23:58:58.765176 kernel: Fallback order for Node 0: 0
Sep 11 23:58:58.765182 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 11 23:58:58.765188 kernel: Policy zone: DMA
Sep 11 23:58:58.765194 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 23:58:58.765201 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 11 23:58:58.765207 kernel: software IO TLB: area num 4.
Sep 11 23:58:58.765213 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 11 23:58:58.765219 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 11 23:58:58.765227 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 23:58:58.765234 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 23:58:58.765240 kernel: rcu: RCU event tracing is enabled.
Sep 11 23:58:58.765247 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 23:58:58.765253 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 23:58:58.765260 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 23:58:58.765266 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 23:58:58.765273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 23:58:58.765279 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:58:58.765286 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:58:58.765292 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 11 23:58:58.765300 kernel: GICv3: 256 SPIs implemented
Sep 11 23:58:58.765306 kernel: GICv3: 0 Extended SPIs implemented
Sep 11 23:58:58.765312 kernel: Root IRQ handler: gic_handle_irq
Sep 11 23:58:58.765318 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 11 23:58:58.765325 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 11 23:58:58.765331 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 11 23:58:58.765337 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 11 23:58:58.765343 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 11 23:58:58.765350 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 11 23:58:58.765356 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 11 23:58:58.765362 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 11 23:58:58.765368 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 23:58:58.765376 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:58:58.765382 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 11 23:58:58.765389 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 11 23:58:58.765395 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 11 23:58:58.765401 kernel: arm-pv: using stolen time PV
Sep 11 23:58:58.765408 kernel: Console: colour dummy device 80x25
Sep 11 23:58:58.765414 kernel: ACPI: Core revision 20240827
Sep 11 23:58:58.765421 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 11 23:58:58.765427 kernel: pid_max: default: 32768 minimum: 301
Sep 11 23:58:58.765434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 23:58:58.765441 kernel: landlock: Up and running.
Sep 11 23:58:58.765448 kernel: SELinux: Initializing.
Sep 11 23:58:58.765454 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:58:58.765461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:58:58.765467 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 23:58:58.765474 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 23:58:58.765480 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 23:58:58.765487 kernel: Remapping and enabling EFI services.
Sep 11 23:58:58.765493 kernel: smp: Bringing up secondary CPUs ...
Sep 11 23:58:58.765505 kernel: Detected PIPT I-cache on CPU1
Sep 11 23:58:58.765512 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 11 23:58:58.765519 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 11 23:58:58.765527 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:58:58.765534 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 11 23:58:58.765541 kernel: Detected PIPT I-cache on CPU2
Sep 11 23:58:58.765548 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 11 23:58:58.765555 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 11 23:58:58.765563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:58:58.765569 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 11 23:58:58.765576 kernel: Detected PIPT I-cache on CPU3
Sep 11 23:58:58.765583 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 11 23:58:58.765590 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 11 23:58:58.765596 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:58:58.765603 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 11 23:58:58.765610 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 23:58:58.765617 kernel: SMP: Total of 4 processors activated.
Sep 11 23:58:58.765625 kernel: CPU: All CPU(s) started at EL1
Sep 11 23:58:58.765631 kernel: CPU features: detected: 32-bit EL0 Support
Sep 11 23:58:58.765638 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 11 23:58:58.765645 kernel: CPU features: detected: Common not Private translations
Sep 11 23:58:58.765652 kernel: CPU features: detected: CRC32 instructions
Sep 11 23:58:58.765659 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 11 23:58:58.765665 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 11 23:58:58.765672 kernel: CPU features: detected: LSE atomic instructions
Sep 11 23:58:58.765679 kernel: CPU features: detected: Privileged Access Never
Sep 11 23:58:58.765687 kernel: CPU features: detected: RAS Extension Support
Sep 11 23:58:58.765694 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 11 23:58:58.765701 kernel: alternatives: applying system-wide alternatives
Sep 11 23:58:58.765707 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 11 23:58:58.765715 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 11 23:58:58.765721 kernel: devtmpfs: initialized
Sep 11 23:58:58.765728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 23:58:58.765735 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 23:58:58.765742 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 11 23:58:58.765750 kernel: 0 pages in range for non-PLT usage
Sep 11 23:58:58.765756 kernel: 508560 pages in range for PLT usage
Sep 11 23:58:58.765763 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 23:58:58.765774 kernel: SMBIOS 3.0.0 present.
Sep 11 23:58:58.765782 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 11 23:58:58.765789 kernel: DMI: Memory slots populated: 1/1
Sep 11 23:58:58.765796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 23:58:58.765803 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 11 23:58:58.765810 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 11 23:58:58.765818 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 11 23:58:58.765825 kernel: audit: initializing netlink subsys (disabled)
Sep 11 23:58:58.765832 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 11 23:58:58.765839 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 23:58:58.765846 kernel: cpuidle: using governor menu
Sep 11 23:58:58.765852 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 11 23:58:58.765859 kernel: ASID allocator initialised with 32768 entries
Sep 11 23:58:58.765866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 23:58:58.765873 kernel: Serial: AMBA PL011 UART driver
Sep 11 23:58:58.765881 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 23:58:58.765888 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 23:58:58.765894 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 11 23:58:58.765901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 11 23:58:58.765908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 23:58:58.765915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 23:58:58.765921 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 11 23:58:58.765928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 11 23:58:58.765935 kernel: ACPI: Added _OSI(Module Device)
Sep 11 23:58:58.765943 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 23:58:58.765950 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 23:58:58.765956 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 23:58:58.765963 kernel: ACPI: Interpreter enabled
Sep 11 23:58:58.765970 kernel: ACPI: Using GIC for interrupt routing
Sep 11 23:58:58.765976 kernel: ACPI: MCFG table detected, 1 entries
Sep 11 23:58:58.765983 kernel: ACPI: CPU0 has been hot-added
Sep 11 23:58:58.765990 kernel: ACPI: CPU1 has been hot-added
Sep 11 23:58:58.765996 kernel: ACPI: CPU2 has been hot-added
Sep 11 23:58:58.766003 kernel: ACPI: CPU3 has been hot-added
Sep 11 23:58:58.766011 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 11 23:58:58.766018 kernel: printk: legacy console [ttyAMA0] enabled
Sep 11 23:58:58.766025 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 23:58:58.766198 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 23:58:58.766278 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 11 23:58:58.766348 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 11 23:58:58.766410 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 11 23:58:58.766484 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 11 23:58:58.766493 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 11 23:58:58.766500 kernel: PCI host bridge to bus 0000:00
Sep 11 23:58:58.766564 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 11 23:58:58.766655 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 11 23:58:58.766711 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 11 23:58:58.766763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 23:58:58.766851 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 11 23:58:58.766923 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 23:58:58.766985 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 11 23:58:58.767044 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 11 23:58:58.767103 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 11 23:58:58.767177 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 11 23:58:58.767238 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 11 23:58:58.767299 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 11 23:58:58.767352 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 11 23:58:58.767403 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 11 23:58:58.767455 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 11 23:58:58.767464 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 11 23:58:58.767471 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 11 23:58:58.767478 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 11 23:58:58.767486 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 11 23:58:58.767493 kernel: iommu: Default domain type: Translated
Sep 11 23:58:58.767500 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 11 23:58:58.767507 kernel: efivars: Registered efivars operations
Sep 11 23:58:58.767514 kernel: vgaarb: loaded
Sep 11 23:58:58.767520 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 11 23:58:58.767527 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 23:58:58.767534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 23:58:58.767541 kernel: pnp: PnP ACPI init
Sep 11 23:58:58.767611 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 11 23:58:58.767621 kernel: pnp: PnP ACPI: found 1 devices
Sep 11 23:58:58.767628 kernel: NET: Registered PF_INET protocol family
Sep 11 23:58:58.767635 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 23:58:58.767642 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 23:58:58.767649 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 23:58:58.767656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 23:58:58.767663 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 23:58:58.767671 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 23:58:58.767678 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:58:58.767685 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:58:58.767692 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 23:58:58.767699 kernel: PCI: CLS 0 bytes, default 64
Sep 11 23:58:58.767705 kernel: kvm [1]: HYP mode not available
Sep 11 23:58:58.767712 kernel: Initialise system trusted keyrings
Sep 11 23:58:58.767719 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 23:58:58.767726 kernel: Key type asymmetric registered
Sep 11 23:58:58.767732 kernel: Asymmetric key parser 'x509' registered
Sep 11 23:58:58.767745 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 11 23:58:58.767753 kernel: io scheduler mq-deadline registered
Sep 11 23:58:58.767759 kernel: io scheduler kyber registered
Sep 11 23:58:58.767766 kernel: io scheduler bfq registered
Sep 11 23:58:58.767778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 11 23:58:58.767786 kernel: ACPI: button: Power Button [PWRB]
Sep 11 23:58:58.767793 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 11 23:58:58.767854 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 11 23:58:58.767864 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 23:58:58.767872 kernel: thunder_xcv, ver 1.0
Sep 11 23:58:58.767883 kernel: thunder_bgx, ver 1.0
Sep 11 23:58:58.767892 kernel: nicpf, ver 1.0
Sep 11 23:58:58.767899 kernel: nicvf, ver 1.0
Sep 11 23:58:58.767967 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 11 23:58:58.768023 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-11T23:58:58 UTC (1757635138)
Sep 11 23:58:58.768032 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 23:58:58.768039 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 11 23:58:58.768047 kernel: NET: Registered PF_INET6 protocol family
Sep 11 23:58:58.768054 kernel: watchdog: NMI not fully supported
Sep 11 23:58:58.768061 kernel: watchdog: Hard watchdog permanently disabled
Sep 11 23:58:58.768068 kernel: Segment Routing with IPv6
Sep 11 23:58:58.768074 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 23:58:58.768081 kernel: NET: Registered PF_PACKET protocol family
Sep 11 23:58:58.768088 kernel: Key type dns_resolver registered
Sep 11 23:58:58.768095 kernel: registered taskstats version 1
Sep 11 23:58:58.768101 kernel: Loading compiled-in X.509 certificates
Sep 11 23:58:58.768109 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 586029c251a407eb16ec614b204f62df0d61537f'
Sep 11 23:58:58.768116 kernel: Demotion targets for Node 0: null
Sep 11 23:58:58.768123 kernel: Key type .fscrypt registered
Sep 11 23:58:58.768150 kernel: Key type fscrypt-provisioning registered
Sep 11 23:58:58.768159 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 23:58:58.768166 kernel: ima: Allocated hash algorithm: sha1
Sep 11 23:58:58.768173 kernel: ima: No architecture policies found
Sep 11 23:58:58.768180 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 11 23:58:58.768190 kernel: clk: Disabling unused clocks
Sep 11 23:58:58.768197 kernel: PM: genpd: Disabling unused power domains
Sep 11 23:58:58.768203 kernel: Warning: unable to open an initial console.
Sep 11 23:58:58.768211 kernel: Freeing unused kernel memory: 38976K
Sep 11 23:58:58.768217 kernel: Run /init as init process
Sep 11 23:58:58.768224 kernel: with arguments:
Sep 11 23:58:58.768231 kernel: /init
Sep 11 23:58:58.768237 kernel: with environment:
Sep 11 23:58:58.768244 kernel: HOME=/
Sep 11 23:58:58.768250 kernel: TERM=linux
Sep 11 23:58:58.768258 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 23:58:58.768266 systemd[1]: Successfully made /usr/ read-only.
Sep 11 23:58:58.768276 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:58:58.768284 systemd[1]: Detected virtualization kvm.
Sep 11 23:58:58.768291 systemd[1]: Detected architecture arm64.
Sep 11 23:58:58.768298 systemd[1]: Running in initrd.
Sep 11 23:58:58.768305 systemd[1]: No hostname configured, using default hostname.
Sep 11 23:58:58.768314 systemd[1]: Hostname set to .
Sep 11 23:58:58.768322 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:58:58.768329 systemd[1]: Queued start job for default target initrd.target.
Sep 11 23:58:58.768336 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:58:58.768344 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:58:58.768351 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 23:58:58.768359 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:58:58.768366 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 23:58:58.768376 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 23:58:58.768384 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 23:58:58.768391 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 23:58:58.768399 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:58:58.768406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:58:58.768414 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:58:58.768421 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:58:58.768429 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:58:58.768437 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:58:58.768444 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 23:58:58.768452 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 23:58:58.768459 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 23:58:58.768466 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 23:58:58.768474 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:58:58.768481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:58:58.768490 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:58:58.768497 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:58:58.768504 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 23:58:58.768512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:58:58.768519 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 23:58:58.768527 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 23:58:58.768534 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 23:58:58.768542 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:58:58.768549 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:58:58.768557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:58:58.768565 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 23:58:58.768573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:58:58.768580 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 23:58:58.768589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 23:58:58.768613 systemd-journald[245]: Collecting audit messages is disabled.
Sep 11 23:58:58.768631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:58:58.768639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 23:58:58.768648 systemd-journald[245]: Journal started
Sep 11 23:58:58.768666 systemd-journald[245]: Runtime Journal (/run/log/journal/6d4f2c2d8af0445c978ef85d6e7a813f) is 6M, max 48.5M, 42.4M free.
Sep 11 23:58:58.774235 kernel: Bridge firewalling registered
Sep 11 23:58:58.755428 systemd-modules-load[246]: Inserted module 'overlay'
Sep 11 23:58:58.769843 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 11 23:58:58.778159 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 23:58:58.778176 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:58:58.779183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:58:58.781063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:58:58.783881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:58:58.785331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:58:58.786725 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:58:58.796262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:58:58.797857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 23:58:58.802739 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 23:58:58.805177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:58:58.808218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:58:58.809199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:58:58.812512 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:58:58.814948 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=482086cf30ef24f68ac7a1ade8cef289f4704fd240e7f8a80dce8eef21953880
Sep 11 23:58:58.846889 systemd-resolved[301]: Positive Trust Anchors:
Sep 11 23:58:58.846906 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:58:58.846938 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:58:58.851730 systemd-resolved[301]: Defaulting to hostname 'linux'.
Sep 11 23:58:58.852609 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:58:58.855200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:58:58.887153 kernel: SCSI subsystem initialized
Sep 11 23:58:58.891148 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 23:58:58.899166 kernel: iscsi: registered transport (tcp)
Sep 11 23:58:58.911296 kernel: iscsi: registered transport (qla4xxx)
Sep 11 23:58:58.911336 kernel: QLogic iSCSI HBA Driver
Sep 11 23:58:58.927220 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:58:58.952175 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:58:58.953983 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:58:58.991098 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:58:58.993123 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 23:58:59.051184 kernel: raid6: neonx8 gen() 15533 MB/s
Sep 11 23:58:59.068171 kernel: raid6: neonx4 gen() 15287 MB/s
Sep 11 23:58:59.085162 kernel: raid6: neonx2 gen() 12966 MB/s
Sep 11 23:58:59.102177 kernel: raid6: neonx1 gen() 10240 MB/s
Sep 11 23:58:59.119154 kernel: raid6: int64x8 gen() 6886 MB/s
Sep 11 23:58:59.136173 kernel: raid6: int64x4 gen() 7327 MB/s
Sep 11 23:58:59.153161 kernel: raid6: int64x2 gen() 6092 MB/s
Sep 11 23:58:59.170151 kernel: raid6: int64x1 gen() 5052 MB/s
Sep 11 23:58:59.170182 kernel: raid6: using algorithm neonx8 gen() 15533 MB/s
Sep 11 23:58:59.187162 kernel: raid6: .... xor() 12042 MB/s, rmw enabled
Sep 11 23:58:59.187192 kernel: raid6: using neon recovery algorithm
Sep 11 23:58:59.192146 kernel: xor: measuring software checksum speed
Sep 11 23:58:59.192165 kernel: 8regs : 21653 MB/sec
Sep 11 23:58:59.193162 kernel: 32regs : 19735 MB/sec
Sep 11 23:58:59.193180 kernel: arm64_neon : 27204 MB/sec
Sep 11 23:58:59.193189 kernel: xor: using function: arm64_neon (27204 MB/sec)
Sep 11 23:58:59.245178 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 23:58:59.252164 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:58:59.254319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:58:59.286236 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 11 23:58:59.290413 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:58:59.292062 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 23:58:59.315308 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 11 23:58:59.336015 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 23:58:59.338253 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:58:59.391737 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:58:59.394065 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 23:58:59.443153 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 11 23:58:59.450852 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 23:58:59.452804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:58:59.457803 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 23:58:59.457826 kernel: GPT:9289727 != 19775487
Sep 11 23:58:59.457835 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 23:58:59.457844 kernel: GPT:9289727 != 19775487
Sep 11 23:58:59.457852 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 23:58:59.457861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:58:59.452918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:58:59.456816 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:58:59.464383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:58:59.484290 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 23:58:59.491525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 23:58:59.492666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:58:59.503569 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 23:58:59.514619 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 23:58:59.515603 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 23:58:59.524350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 23:58:59.525288 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 23:58:59.526927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:58:59.528604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 23:58:59.530794 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 23:58:59.532472 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 23:58:59.556447 disk-uuid[592]: Primary Header is updated.
Sep 11 23:58:59.556447 disk-uuid[592]: Secondary Entries is updated.
Sep 11 23:58:59.556447 disk-uuid[592]: Secondary Header is updated.
Sep 11 23:58:59.561168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:58:59.561622 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 23:59:00.568332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:59:00.571027 disk-uuid[595]: The operation has completed successfully.
Sep 11 23:59:00.603396 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 23:59:00.603506 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 23:59:00.625529 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 23:59:00.654180 sh[611]: Success
Sep 11 23:59:00.665490 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 23:59:00.665524 kernel: device-mapper: uevent: version 1.0.3
Sep 11 23:59:00.666382 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 23:59:00.674195 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 11 23:59:00.700910 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 23:59:00.703484 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 23:59:00.716297 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 23:59:00.724816 kernel: BTRFS: device fsid b46dc80b-5663-423a-b9f6-4361968007e2 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 11 23:59:00.724864 kernel: BTRFS info (device dm-0): first mount of filesystem b46dc80b-5663-423a-b9f6-4361968007e2
Sep 11 23:59:00.724875 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:59:00.729350 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 23:59:00.729390 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 23:59:00.730401 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 23:59:00.731417 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 23:59:00.732552 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 23:59:00.733262 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 23:59:00.735984 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 23:59:00.760149 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Sep 11 23:59:00.762138 kernel: BTRFS info (device vda6): first mount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:59:00.762186 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:59:00.764559 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:59:00.764597 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:59:00.769166 kernel: BTRFS info (device vda6): last unmount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:59:00.770120 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 23:59:00.772001 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 23:59:00.835105 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 23:59:00.840371 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 23:59:00.874207 ignition[703]: Ignition 2.21.0
Sep 11 23:59:00.874220 ignition[703]: Stage: fetch-offline
Sep 11 23:59:00.874253 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:59:00.874261 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:59:00.874425 ignition[703]: parsed url from cmdline: ""
Sep 11 23:59:00.874428 ignition[703]: no config URL provided
Sep 11 23:59:00.874432 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 23:59:00.874439 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Sep 11 23:59:00.874458 ignition[703]: op(1): [started] loading QEMU firmware config module
Sep 11 23:59:00.874462 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 23:59:00.879381 ignition[703]: op(1): [finished] loading QEMU firmware config module
Sep 11 23:59:00.885350 systemd-networkd[801]: lo: Link UP
Sep 11 23:59:00.885362 systemd-networkd[801]: lo: Gained carrier
Sep 11 23:59:00.886118 systemd-networkd[801]: Enumeration completed
Sep 11 23:59:00.886949 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:59:00.886953 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 23:59:00.887194 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 23:59:00.888077 systemd-networkd[801]: eth0: Link UP
Sep 11 23:59:00.888261 systemd-networkd[801]: eth0: Gained carrier
Sep 11 23:59:00.888271 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:59:00.888882 systemd[1]: Reached target network.target - Network.
Sep 11 23:59:00.918191 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 23:59:00.934678 ignition[703]: parsing config with SHA512: 0dc58b872a27ea1a86959b3f7c5de4d92dc5c07087f9a4a7e82cbbbe9509a989b84bdd9339d2bf9b62c62420b1e46eb15d5a38788ac58b068f21bce962db31de
Sep 11 23:59:00.941476 unknown[703]: fetched base config from "system"
Sep 11 23:59:00.941490 unknown[703]: fetched user config from "qemu"
Sep 11 23:59:00.941997 ignition[703]: fetch-offline: fetch-offline passed
Sep 11 23:59:00.942061 ignition[703]: Ignition finished successfully
Sep 11 23:59:00.943884 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 23:59:00.945185 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 23:59:00.945927 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 23:59:00.974517 ignition[809]: Ignition 2.21.0
Sep 11 23:59:00.974533 ignition[809]: Stage: kargs
Sep 11 23:59:00.974743 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:59:00.974754 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:59:00.976211 ignition[809]: kargs: kargs passed
Sep 11 23:59:00.976266 ignition[809]: Ignition finished successfully
Sep 11 23:59:00.978952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 23:59:00.982291 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 23:59:01.007820 ignition[817]: Ignition 2.21.0
Sep 11 23:59:01.007834 ignition[817]: Stage: disks
Sep 11 23:59:01.007956 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:59:01.007965 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:59:01.009447 ignition[817]: disks: disks passed
Sep 11 23:59:01.011071 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 23:59:01.009506 ignition[817]: Ignition finished successfully
Sep 11 23:59:01.012295 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 23:59:01.013645 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 23:59:01.015026 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 23:59:01.016495 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 23:59:01.017929 systemd[1]: Reached target basic.target - Basic System.
Sep 11 23:59:01.020160 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 23:59:01.053469 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 23:59:01.058368 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 23:59:01.060344 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 23:59:01.125045 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 23:59:01.126302 kernel: EXT4-fs (vda9): mounted filesystem f6e22e61-f8f0-470c-befd-91d703c5ae2a r/w with ordered data mode. Quota mode: none.
Sep 11 23:59:01.126172 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 23:59:01.128147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 23:59:01.130187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 23:59:01.130948 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 23:59:01.130991 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 23:59:01.131013 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 23:59:01.146449 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 23:59:01.148227 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 23:59:01.152540 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Sep 11 23:59:01.152576 kernel: BTRFS info (device vda6): first mount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:59:01.152587 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:59:01.156314 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:59:01.156353 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:59:01.157323 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 23:59:01.185502 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 23:59:01.188840 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory
Sep 11 23:59:01.192758 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 23:59:01.196301 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 23:59:01.257773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 23:59:01.259866 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 23:59:01.261328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 23:59:01.281150 kernel: BTRFS info (device vda6): last unmount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:59:01.292259 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 23:59:01.303360 ignition[949]: INFO : Ignition 2.21.0
Sep 11 23:59:01.303360 ignition[949]: INFO : Stage: mount
Sep 11 23:59:01.305421 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:59:01.305421 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:59:01.307026 ignition[949]: INFO : mount: mount passed
Sep 11 23:59:01.307026 ignition[949]: INFO : Ignition finished successfully
Sep 11 23:59:01.307825 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 23:59:01.309752 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 23:59:01.864017 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 23:59:01.865563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 23:59:01.895427 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (961)
Sep 11 23:59:01.895463 kernel: BTRFS info (device vda6): first mount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:59:01.895474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:59:01.898152 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:59:01.898181 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:59:01.899654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 23:59:01.928460 ignition[978]: INFO : Ignition 2.21.0
Sep 11 23:59:01.928460 ignition[978]: INFO : Stage: files
Sep 11 23:59:01.930183 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:59:01.930183 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:59:01.932229 ignition[978]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 23:59:01.933225 ignition[978]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 23:59:01.933225 ignition[978]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 23:59:01.935757 ignition[978]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 23:59:01.936796 ignition[978]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 23:59:01.936796 ignition[978]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 23:59:01.936286 unknown[978]: wrote ssh authorized keys file for user: core
Sep 11 23:59:01.940009 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 11 23:59:01.941569 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 11 23:59:01.955265 systemd-networkd[801]: eth0: Gained IPv6LL
Sep 11 23:59:01.986694 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 23:59:02.224065 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 23:59:02.225803 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 23:59:02.237342 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 23:59:02.237342 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 23:59:02.237342 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 11 23:59:02.237342 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 11 23:59:02.237342 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 11 23:59:02.237342 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 11 23:59:02.653203 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 23:59:03.332942 ignition[978]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 11 23:59:03.332942 ignition[978]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
"/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 11 23:59:03.336245 ignition[978]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 11 23:59:03.349918 ignition[978]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 23:59:03.353493 ignition[978]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 23:59:03.354792 ignition[978]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 11 23:59:03.354792 ignition[978]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 11 23:59:03.354792 ignition[978]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 23:59:03.354792 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 23:59:03.354792 ignition[978]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 23:59:03.354792 ignition[978]: INFO : files: files passed Sep 11 23:59:03.354792 ignition[978]: INFO : Ignition finished successfully Sep 11 23:59:03.357639 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 23:59:03.359886 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 23:59:03.385290 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 23:59:03.387879 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 23:59:03.388000 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 23:59:03.391977 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory Sep 11 23:59:03.395246 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 23:59:03.395246 initrd-setup-root-after-ignition[1009]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 23:59:03.397888 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 23:59:03.397219 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 23:59:03.399100 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 23:59:03.401776 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 23:59:03.455118 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 23:59:03.455276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 23:59:03.457826 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 23:59:03.459016 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 23:59:03.460490 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 23:59:03.461245 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 23:59:03.483198 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 11 23:59:03.485350 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 23:59:03.507690 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 23:59:03.508712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 23:59:03.510218 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 23:59:03.511649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 23:59:03.511768 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 23:59:03.513886 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 23:59:03.515313 systemd[1]: Stopped target basic.target - Basic System. Sep 11 23:59:03.516526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 23:59:03.517849 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 23:59:03.519302 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 23:59:03.520789 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 23:59:03.522291 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 23:59:03.523714 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 23:59:03.525198 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 23:59:03.526839 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 23:59:03.528117 systemd[1]: Stopped target swap.target - Swaps. Sep 11 23:59:03.529316 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 23:59:03.529437 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 23:59:03.531226 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 23:59:03.532705 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 23:59:03.534099 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 23:59:03.535239 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 23:59:03.536790 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 23:59:03.536914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 23:59:03.539094 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 23:59:03.539240 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 23:59:03.540882 systemd[1]: Stopped target paths.target - Path Units. Sep 11 23:59:03.542014 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 23:59:03.545197 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 23:59:03.546188 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 23:59:03.547929 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 23:59:03.549162 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 23:59:03.549252 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 23:59:03.550614 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 23:59:03.550689 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 23:59:03.552012 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 11 23:59:03.552122 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 23:59:03.553705 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 23:59:03.553818 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 23:59:03.555897 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 23:59:03.557050 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 23:59:03.557198 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 23:59:03.559738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 11 23:59:03.560795 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 23:59:03.560918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 23:59:03.562440 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 23:59:03.562553 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 23:59:03.567951 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 23:59:03.568035 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 23:59:03.574417 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 23:59:03.582435 ignition[1033]: INFO : Ignition 2.21.0 Sep 11 23:59:03.582435 ignition[1033]: INFO : Stage: umount Sep 11 23:59:03.584464 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 23:59:03.584464 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:59:03.584464 ignition[1033]: INFO : umount: umount passed Sep 11 23:59:03.584464 ignition[1033]: INFO : Ignition finished successfully Sep 11 23:59:03.588508 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 23:59:03.588636 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 23:59:03.590018 systemd[1]: Stopped target network.target - Network. Sep 11 23:59:03.591148 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 23:59:03.591202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 23:59:03.592638 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 23:59:03.592677 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 23:59:03.594095 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 23:59:03.594162 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 23:59:03.595684 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 23:59:03.595723 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 23:59:03.597073 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 23:59:03.598386 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 23:59:03.609084 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 23:59:03.609241 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 23:59:03.612317 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 23:59:03.612603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 11 23:59:03.612639 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 23:59:03.615625 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 11 23:59:03.615828 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 23:59:03.615915 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 23:59:03.618900 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 23:59:03.619334 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 23:59:03.620786 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 23:59:03.620820 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 23:59:03.623145 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 11 23:59:03.623995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 23:59:03.624050 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 23:59:03.626722 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 23:59:03.626781 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 23:59:03.629839 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 23:59:03.629886 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 11 23:59:03.630830 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 23:59:03.635240 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 23:59:03.635490 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 23:59:03.636489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 11 23:59:03.639823 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 23:59:03.639920 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 11 23:59:03.643940 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 11 23:59:03.644096 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 23:59:03.646094 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 11 23:59:03.646205 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 11 23:59:03.650696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 11 23:59:03.650771 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 11 23:59:03.651738 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 11 23:59:03.654636 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 23:59:03.656419 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 11 23:59:03.656475 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 11 23:59:03.659236 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 11 23:59:03.659285 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 11 23:59:03.665282 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 11 23:59:03.665343 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 23:59:03.672269 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 11 23:59:03.674150 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 11 23:59:03.674951 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 23:59:03.677177 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 11 23:59:03.677222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 23:59:03.679966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 23:59:03.680016 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 23:59:03.692941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 11 23:59:03.693057 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 11 23:59:03.694869 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 11 23:59:03.696801 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 11 23:59:03.714575 systemd[1]: Switching root. Sep 11 23:59:03.742242 systemd-journald[245]: Journal stopped Sep 11 23:59:04.549587 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Sep 11 23:59:04.549636 kernel: SELinux: policy capability network_peer_controls=1 Sep 11 23:59:04.549653 kernel: SELinux: policy capability open_perms=1 Sep 11 23:59:04.549662 kernel: SELinux: policy capability extended_socket_class=1 Sep 11 23:59:04.549674 kernel: SELinux: policy capability always_check_network=0 Sep 11 23:59:04.549684 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 11 23:59:04.549693 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 11 23:59:04.549702 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 11 23:59:04.549712 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 11 23:59:04.549721 kernel: SELinux: policy capability userspace_initial_context=0 Sep 11 23:59:04.549731 kernel: audit: type=1403 audit(1757635143.907:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 11 23:59:04.549741 systemd[1]: Successfully loaded SELinux policy in 49.517ms. Sep 11 23:59:04.549773 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.928ms. Sep 11 23:59:04.549788 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 23:59:04.549799 systemd[1]: Detected virtualization kvm. Sep 11 23:59:04.549809 systemd[1]: Detected architecture arm64. Sep 11 23:59:04.549818 systemd[1]: Detected first boot. Sep 11 23:59:04.549830 systemd[1]: Initializing machine ID from VM UUID. Sep 11 23:59:04.549840 zram_generator::config[1079]: No configuration found. Sep 11 23:59:04.549851 kernel: NET: Registered PF_VSOCK protocol family Sep 11 23:59:04.549860 systemd[1]: Populated /etc with preset unit settings. Sep 11 23:59:04.549871 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 11 23:59:04.549884 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 11 23:59:04.549895 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 11 23:59:04.549905 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 11 23:59:04.549916 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 11 23:59:04.549927 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 11 23:59:04.549937 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 11 23:59:04.549947 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
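[Editor's note] The long "+PAM +AUDIT +SELINUX -APPARMOR ..." string above is systemd's compile-time feature list: a leading '+' means the feature was built in, '-' means it was compiled out. A minimal sketch of a parser for that notation (hypothetical helper, not part of systemd):

```python
def parse_features(flags: str) -> dict[str, bool]:
    """Split a systemd feature string like '+PAM +AUDIT -APPARMOR'
    into {feature: enabled}; '+' = compiled in, '-' = compiled out."""
    return {tok[1:]: tok.startswith("+") for tok in flags.split()}

# Excerpt of the string logged above
features = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP")
assert features["SELINUX"] and not features["APPARMOR"]
```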
Sep 11 23:59:04.549956 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 11 23:59:04.549966 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 11 23:59:04.549976 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 11 23:59:04.549986 systemd[1]: Created slice user.slice - User and Session Slice. Sep 11 23:59:04.549997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 23:59:04.550009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 23:59:04.550019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 11 23:59:04.550029 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 11 23:59:04.550039 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 11 23:59:04.550049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 23:59:04.550058 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 11 23:59:04.550068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 23:59:04.550080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 23:59:04.550090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 11 23:59:04.550099 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 11 23:59:04.550109 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 11 23:59:04.550119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 11 23:59:04.550155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 23:59:04.550167 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 23:59:04.550177 systemd[1]: Reached target slices.target - Slice Units. Sep 11 23:59:04.550187 systemd[1]: Reached target swap.target - Swaps. Sep 11 23:59:04.550200 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 11 23:59:04.550210 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 11 23:59:04.550220 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 11 23:59:04.550230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 23:59:04.550239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 23:59:04.550251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 23:59:04.550261 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 11 23:59:04.550271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 11 23:59:04.550281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 11 23:59:04.550293 systemd[1]: Mounting media.mount - External Media Directory... Sep 11 23:59:04.550303 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 11 23:59:04.550313 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 11 23:59:04.550322 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 11 23:59:04.550333 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 11 23:59:04.550347 systemd[1]: Reached target machines.target - Containers. Sep 11 23:59:04.550357 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 11 23:59:04.550367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 23:59:04.550377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 23:59:04.550388 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 11 23:59:04.550398 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 23:59:04.550408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 23:59:04.550418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 23:59:04.550428 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 11 23:59:04.550438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 23:59:04.550448 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 11 23:59:04.550458 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 11 23:59:04.550471 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 11 23:59:04.550482 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 11 23:59:04.550492 systemd[1]: Stopped systemd-fsck-usr.service. Sep 11 23:59:04.550502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 23:59:04.550512 kernel: fuse: init (API version 7.41) Sep 11 23:59:04.550521 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 23:59:04.550531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 23:59:04.550541 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 23:59:04.550551 kernel: ACPI: bus type drm_connector registered Sep 11 23:59:04.550562 kernel: loop: module loaded Sep 11 23:59:04.550571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 11 23:59:04.550582 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 11 23:59:04.550592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 23:59:04.550602 systemd[1]: verity-setup.service: Deactivated successfully. Sep 11 23:59:04.550613 systemd[1]: Stopped verity-setup.service. Sep 11 23:59:04.550623 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 23:59:04.550633 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 23:59:04.550644 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 23:59:04.550654 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 23:59:04.550664 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 11 23:59:04.550673 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
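[Editor's note] The modprobe@*.service entries above are instances of a single systemd template unit; the text between '@' and '.service' is the instance name, i.e. the kernel module the instance loads. A tiny illustrative helper (hypothetical, not from the OS image):

```python
def module_for(unit: str) -> str:
    """Return the kernel module an instance like 'modprobe@dm_mod.service'
    loads: the template instance is the text between '@' and '.service'."""
    _, _, instance = unit.partition("@")
    return instance.removesuffix(".service")

for u in ("modprobe@configfs.service", "modprobe@efi_pstore.service"):
    print(u, "->", module_for(u))  # configfs, efi_pstore
```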
Sep 11 23:59:04.550707 systemd-journald[1154]: Collecting audit messages is disabled. Sep 11 23:59:04.550735 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 23:59:04.550747 systemd-journald[1154]: Journal started Sep 11 23:59:04.550774 systemd-journald[1154]: Runtime Journal (/run/log/journal/6d4f2c2d8af0445c978ef85d6e7a813f) is 6M, max 48.5M, 42.4M free. Sep 11 23:59:04.324808 systemd[1]: Queued start job for default target multi-user.target. Sep 11 23:59:04.345175 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 11 23:59:04.345572 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 23:59:04.552677 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 23:59:04.554183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 23:59:04.555323 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 11 23:59:04.555490 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 23:59:04.558451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 23:59:04.558627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 23:59:04.559748 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 23:59:04.559923 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 23:59:04.561012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 23:59:04.561185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 23:59:04.562296 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 23:59:04.562436 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 23:59:04.563605 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 23:59:04.563767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 23:59:04.564897 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 23:59:04.566040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 23:59:04.567504 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 23:59:04.568709 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 23:59:04.580200 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 23:59:04.582328 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 23:59:04.584053 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 23:59:04.584999 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 23:59:04.585046 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 23:59:04.586797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 23:59:04.599026 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 23:59:04.600275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 23:59:04.601767 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 23:59:04.603649 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 11 23:59:04.604700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 23:59:04.605891 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 23:59:04.606806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 23:59:04.608864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 23:59:04.611363 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 23:59:04.616566 systemd-journald[1154]: Time spent on flushing to /var/log/journal/6d4f2c2d8af0445c978ef85d6e7a813f is 23.099ms for 884 entries. Sep 11 23:59:04.616566 systemd-journald[1154]: System Journal (/var/log/journal/6d4f2c2d8af0445c978ef85d6e7a813f) is 8M, max 195.6M, 187.6M free. Sep 11 23:59:04.649823 systemd-journald[1154]: Received client request to flush runtime journal. Sep 11 23:59:04.649870 kernel: loop0: detected capacity change from 0 to 138376 Sep 11 23:59:04.649883 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 23:59:04.614279 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 23:59:04.617617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 23:59:04.618923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 23:59:04.620189 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 23:59:04.623272 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 23:59:04.631059 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 23:59:04.634295 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 23:59:04.641178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 23:59:04.656891 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 23:59:04.663159 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 23:59:04.666524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 23:59:04.673155 kernel: loop1: detected capacity change from 0 to 211168 Sep 11 23:59:04.676570 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 23:59:04.692411 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 11 23:59:04.692430 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 11 23:59:04.696985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 23:59:04.699167 kernel: loop2: detected capacity change from 0 to 107312 Sep 11 23:59:04.729198 kernel: loop3: detected capacity change from 0 to 138376 Sep 11 23:59:04.736529 kernel: loop4: detected capacity change from 0 to 211168 Sep 11 23:59:04.742180 kernel: loop5: detected capacity change from 0 to 107312 Sep 11 23:59:04.748147 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 11 23:59:04.748534 (sd-merge)[1219]: Merged extensions into '/usr'. Sep 11 23:59:04.769966 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 23:59:04.770118 systemd[1]: Reloading... 
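[Editor's note] The (sd-merge) entries above are systemd-sysext discovering the three extension images, including the kubernetes.raw symlink Ignition wrote earlier, and overlaying them on /usr; the Reload that follows picks up the units those extensions ship. A simplified sketch of the discovery step under the path named in the log (real sysext also validates extension-release metadata, which this sketch omits):

```python
from pathlib import Path

def discover_extensions(ext_dir: str = "/etc/extensions") -> list[Path]:
    """Resolve *.raw images (or symlinks to them, as Ignition created for
    kubernetes.raw) under the sysext directory. Simplified illustration only."""
    d = Path(ext_dir)
    if not d.is_dir():
        return []
    return sorted(p.resolve() for p in d.glob("*.raw"))

print(discover_extensions())
```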
Sep 11 23:59:04.826494 zram_generator::config[1248]: No configuration found. Sep 11 23:59:04.908726 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 23:59:04.910840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 23:59:04.973593 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 23:59:04.973822 systemd[1]: Reloading finished in 203 ms. Sep 11 23:59:04.990779 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 23:59:04.991999 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 23:59:05.006585 systemd[1]: Starting ensure-sysext.service... Sep 11 23:59:05.008359 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 23:59:05.017300 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... Sep 11 23:59:05.017317 systemd[1]: Reloading... Sep 11 23:59:05.025645 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 23:59:05.025670 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 23:59:05.025904 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 23:59:05.026083 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 23:59:05.027451 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 23:59:05.027785 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Sep 11 23:59:05.027895 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Sep 11 23:59:05.031205 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 23:59:05.031309 systemd-tmpfiles[1282]: Skipping /boot Sep 11 23:59:05.041301 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 23:59:05.041497 systemd-tmpfiles[1282]: Skipping /boot Sep 11 23:59:05.068293 zram_generator::config[1310]: No configuration found. Sep 11 23:59:05.137914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 23:59:05.200000 systemd[1]: Reloading finished in 182 ms. Sep 11 23:59:05.221788 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 23:59:05.227286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 23:59:05.238222 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 23:59:05.240557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 23:59:05.242609 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 23:59:05.245153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 23:59:05.249964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 11 23:59:05.254192 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 23:59:05.259358 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 23:59:05.264483 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 23:59:05.266832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 23:59:05.269424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 23:59:05.270280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 23:59:05.270386 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 23:59:05.271966 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 23:59:05.273723 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 23:59:05.280087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 23:59:05.280531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 23:59:05.280695 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 23:59:05.286651 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 23:59:05.288811 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Sep 11 23:59:05.289503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 23:59:05.289716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 23:59:05.291511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 23:59:05.291722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 23:59:05.293356 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 23:59:05.297354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 23:59:05.302690 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 23:59:05.303497 augenrules[1382]: No rules Sep 11 23:59:05.307412 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 23:59:05.307686 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 23:59:05.311307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 23:59:05.313702 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 23:59:05.315122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 23:59:05.319630 systemd[1]: Finished ensure-sysext.service. Sep 11 23:59:05.322617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 23:59:05.324487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 23:59:05.326511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 11 23:59:05.330636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 23:59:05.336346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 23:59:05.337431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 23:59:05.337475 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 23:59:05.351367 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 23:59:05.359270 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 23:59:05.360386 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 23:59:05.364391 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 23:59:05.366600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 23:59:05.367669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 23:59:05.369520 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 23:59:05.369700 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 23:59:05.371511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 23:59:05.371705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 23:59:05.375472 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 23:59:05.375651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 23:59:05.384429 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 11 23:59:05.385583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 23:59:05.385639 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 23:59:05.445140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 23:59:05.448036 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 23:59:05.477847 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 23:59:05.504524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 23:59:05.540674 systemd-resolved[1348]: Positive Trust Anchors: Sep 11 23:59:05.540691 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:59:05.540723 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 23:59:05.546528 systemd-resolved[1348]: Defaulting to hostname 'linux'. Sep 11 23:59:05.548714 systemd-networkd[1427]: lo: Link UP Sep 11 23:59:05.548721 systemd-networkd[1427]: lo: Gained carrier Sep 11 23:59:05.549585 systemd-networkd[1427]: Enumeration completed Sep 11 23:59:05.549666 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 23:59:05.550018 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 23:59:05.550021 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 23:59:05.550577 systemd-networkd[1427]: eth0: Link UP Sep 11 23:59:05.550682 systemd-networkd[1427]: eth0: Gained carrier Sep 11 23:59:05.550696 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 23:59:05.550992 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 23:59:05.552261 systemd[1]: Reached target network.target - Network. Sep 11 23:59:05.552895 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 23:59:05.557308 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 23:59:05.562226 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 23:59:05.563464 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 23:59:05.565256 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 23:59:05.567272 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 23:59:05.567783 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Sep 11 23:59:05.981218 systemd-resolved[1348]: Clock change detected. Flushing caches. Sep 11 23:59:05.981255 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 23:59:05.981297 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-09-11 23:59:05.981175 UTC. Sep 11 23:59:05.992486 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 23:59:06.001488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 23:59:06.002652 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 23:59:06.003624 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 23:59:06.004618 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
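[Editor's note] The positive trust anchor logged above is the well-known DNSSEC root-zone KSK-2017 DS record: owner ".", key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the SHA-256 digest of the key. A small sketch parsing the presentation format (hypothetical helper, not part of systemd-resolved):

```python
from typing import NamedTuple

class DSRecord(NamedTuple):
    owner: str
    key_tag: int
    algorithm: int    # 8 = RSA/SHA-256
    digest_type: int  # 2 = SHA-256
    digest: str

def parse_ds(line: str) -> DSRecord:
    """Parse a presentation-format DS record like the trust anchor above."""
    owner, _cls, _rrtype, tag, alg, dtype, digest = line.split()
    return DSRecord(owner, int(tag), int(alg), int(dtype), digest)

rec = parse_ds(". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
assert rec.key_tag == 20326 and rec.digest_type == 2
```

Note also the timestamp jump from 23:59:05.567 to 23:59:05.981 in the entries above: systemd-timesyncd's initial clock synchronization stepped the clock, which is why systemd-resolved logs "Clock change detected".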
Sep 11 23:59:06.005801 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 23:59:06.006635 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 23:59:06.007633 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 23:59:06.008604 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 23:59:06.008633 systemd[1]: Reached target paths.target - Path Units. Sep 11 23:59:06.009334 systemd[1]: Reached target timers.target - Timer Units. Sep 11 23:59:06.011086 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 23:59:06.013158 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 23:59:06.016018 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 23:59:06.017209 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 23:59:06.018169 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 23:59:06.023693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 23:59:06.025009 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 23:59:06.026494 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 23:59:06.027490 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 23:59:06.028250 systemd[1]: Reached target basic.target - Basic System. Sep 11 23:59:06.028978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 23:59:06.029011 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 23:59:06.030053 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 23:59:06.031897 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 23:59:06.033543 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 23:59:06.035397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 23:59:06.037107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 23:59:06.037876 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 23:59:06.039866 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 23:59:06.043141 jq[1477]: false Sep 11 23:59:06.041463 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 23:59:06.043217 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 23:59:06.047612 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 23:59:06.052914 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 23:59:06.054577 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 23:59:06.055054 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 11 23:59:06.055743 extend-filesystems[1478]: Found /dev/vda6 Sep 11 23:59:06.055936 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 23:59:06.058971 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 23:59:06.059988 extend-filesystems[1478]: Found /dev/vda9 Sep 11 23:59:06.061710 extend-filesystems[1478]: Checking size of /dev/vda9 Sep 11 23:59:06.065795 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 23:59:06.067895 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 23:59:06.068082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 23:59:06.068316 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 23:59:06.068482 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 23:59:06.071528 jq[1495]: true Sep 11 23:59:06.071168 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 23:59:06.073785 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 23:59:06.089015 update_engine[1492]: I20250911 23:59:06.086805 1492 main.cc:92] Flatcar Update Engine starting Sep 11 23:59:06.093947 jq[1502]: true Sep 11 23:59:06.095349 tar[1501]: linux-arm64/LICENSE Sep 11 23:59:06.095349 tar[1501]: linux-arm64/helm Sep 11 23:59:06.097723 extend-filesystems[1478]: Resized partition /dev/vda9 Sep 11 23:59:06.100806 extend-filesystems[1517]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 23:59:06.106912 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 23:59:06.114712 dbus-daemon[1475]: [system] SELinux support is enabled Sep 11 23:59:06.116586 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 23:59:06.117181 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 23:59:06.120320 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 23:59:06.120834 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 23:59:06.122608 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 23:59:06.127626 update_engine[1492]: I20250911 23:59:06.123879 1492 update_check_scheduler.cc:74] Next update check in 9m53s Sep 11 23:59:06.122630 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 23:59:06.124989 systemd[1]: Started update-engine.service - Update Engine. Sep 11 23:59:06.134947 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 23:59:06.143772 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 23:59:06.160776 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 23:59:06.160776 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 23:59:06.160776 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
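[Editor's note] The EXT4 resize reported above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, and the extend-filesystems entries that follow confirm the online resize. Quick arithmetic on the block counts from the log:

```python
BLOCK = 4096  # EXT4 block size reported in the log ("(4k) blocks")

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

# 553472 blocks  -> ~2.11 GiB before the resize
# 1864699 blocks -> ~7.11 GiB after resize2fs grew it online
print(f"{blocks_to_gib(553472):.2f} GiB -> {blocks_to_gib(1864699):.2f} GiB")
```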
Sep 11 23:59:06.176927 extend-filesystems[1478]: Resized filesystem in /dev/vda9 Sep 11 23:59:06.162522 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (Power Button) Sep 11 23:59:06.180478 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Sep 11 23:59:06.162764 systemd-logind[1488]: New seat seat0. Sep 11 23:59:06.166085 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 23:59:06.167803 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 23:59:06.171326 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 23:59:06.184789 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 23:59:06.186689 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 23:59:06.210950 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 23:59:06.295743 containerd[1512]: time="2025-09-11T23:59:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 23:59:06.296058 containerd[1512]: time="2025-09-11T23:59:06.296020806Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 23:59:06.305129 containerd[1512]: time="2025-09-11T23:59:06.305081566Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12µs" Sep 11 23:59:06.305129 containerd[1512]: time="2025-09-11T23:59:06.305118646Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 23:59:06.305129 containerd[1512]: time="2025-09-11T23:59:06.305136406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305303886Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305326126Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305349886Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305410326Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305422926Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305668366Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305683846Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305694646Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305702686Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.305795166Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306762 containerd[1512]: time="2025-09-11T23:59:06.306005726Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306971 containerd[1512]: time="2025-09-11T23:59:06.306034206Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 23:59:06.306971 containerd[1512]: time="2025-09-11T23:59:06.306043846Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 23:59:06.306971 containerd[1512]: time="2025-09-11T23:59:06.306088406Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 23:59:06.306971 containerd[1512]: time="2025-09-11T23:59:06.306303686Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 23:59:06.306971 containerd[1512]: time="2025-09-11T23:59:06.306371486Z" level=info msg="metadata content store policy set" policy=shared Sep 11 23:59:06.359466 containerd[1512]: time="2025-09-11T23:59:06.359402806Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 23:59:06.359562 containerd[1512]: time="2025-09-11T23:59:06.359527766Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 23:59:06.359562 containerd[1512]: time="2025-09-11T23:59:06.359547006Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 23:59:06.359623 containerd[1512]: time="2025-09-11T23:59:06.359561046Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 23:59:06.359623 containerd[1512]: time="2025-09-11T23:59:06.359585246Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 23:59:06.359623 containerd[1512]: time="2025-09-11T23:59:06.359606166Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 23:59:06.359623 containerd[1512]: time="2025-09-11T23:59:06.359618646Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 23:59:06.359688 containerd[1512]: time="2025-09-11T23:59:06.359631406Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 23:59:06.359688 containerd[1512]: time="2025-09-11T23:59:06.359643566Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 23:59:06.359688 containerd[1512]: time="2025-09-11T23:59:06.359655806Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 23:59:06.359688 containerd[1512]: time="2025-09-11T23:59:06.359666606Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 11 23:59:06.359688 containerd[1512]: time="2025-09-11T23:59:06.359679646Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 23:59:06.359865 containerd[1512]: time="2025-09-11T23:59:06.359845166Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 23:59:06.359890 containerd[1512]: time="2025-09-11T23:59:06.359872246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 23:59:06.359907 containerd[1512]: time="2025-09-11T23:59:06.359888286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 23:59:06.359907 containerd[1512]: time="2025-09-11T23:59:06.359900366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 23:59:06.359939 containerd[1512]: time="2025-09-11T23:59:06.359910966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 23:59:06.359939 containerd[1512]: time="2025-09-11T23:59:06.359921606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 23:59:06.359939 containerd[1512]: time="2025-09-11T23:59:06.359933966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 23:59:06.359991 containerd[1512]: time="2025-09-11T23:59:06.359944046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 23:59:06.359991 containerd[1512]: time="2025-09-11T23:59:06.359955126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 23:59:06.359991 containerd[1512]: time="2025-09-11T23:59:06.359971126Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 23:59:06.359991 containerd[1512]: time="2025-09-11T23:59:06.359981886Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 23:59:06.360199 containerd[1512]: time="2025-09-11T23:59:06.360180646Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 23:59:06.360226 containerd[1512]: time="2025-09-11T23:59:06.360200766Z" level=info msg="Start snapshots syncer" Sep 11 23:59:06.360244 containerd[1512]: time="2025-09-11T23:59:06.360226526Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 23:59:06.360501 containerd[1512]: time="2025-09-11T23:59:06.360461166Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 11 23:59:06.360587 containerd[1512]: time="2025-09-11T23:59:06.360514046Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 23:59:06.360910 containerd[1512]: time="2025-09-11T23:59:06.360885286Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 23:59:06.361099 containerd[1512]: time="2025-09-11T23:59:06.361075806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 23:59:06.361126 containerd[1512]: time="2025-09-11T23:59:06.361106366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 23:59:06.361154 containerd[1512]: time="2025-09-11T23:59:06.361138206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 23:59:06.361176 containerd[1512]: time="2025-09-11T23:59:06.361155406Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 23:59:06.361176 containerd[1512]: time="2025-09-11T23:59:06.361168366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 23:59:06.361220 containerd[1512]: time="2025-09-11T23:59:06.361179446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 23:59:06.361220 containerd[1512]: time="2025-09-11T23:59:06.361200286Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 23:59:06.361253 containerd[1512]: time="2025-09-11T23:59:06.361232766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 23:59:06.361253 containerd[1512]: time="2025-09-11T23:59:06.361246606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 11 23:59:06.361285 containerd[1512]: time="2025-09-11T23:59:06.361257966Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 23:59:06.361320 containerd[1512]: time="2025-09-11T23:59:06.361305046Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 23:59:06.361346 containerd[1512]: time="2025-09-11T23:59:06.361324646Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 23:59:06.361346 containerd[1512]: time="2025-09-11T23:59:06.361334086Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 23:59:06.361397 containerd[1512]: time="2025-09-11T23:59:06.361344686Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 23:59:06.361397 containerd[1512]: time="2025-09-11T23:59:06.361369286Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 23:59:06.361732 containerd[1512]: time="2025-09-11T23:59:06.361388486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 23:59:06.361769 containerd[1512]: time="2025-09-11T23:59:06.361738326Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 23:59:06.361912 containerd[1512]: time="2025-09-11T23:59:06.361879686Z" level=info msg="runtime interface created" Sep 11 23:59:06.361912 containerd[1512]: time="2025-09-11T23:59:06.361909366Z" level=info msg="created NRI interface" Sep 11 23:59:06.361968 containerd[1512]: time="2025-09-11T23:59:06.361936526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 23:59:06.361968 containerd[1512]: time="2025-09-11T23:59:06.361957206Z" level=info msg="Connect containerd service" Sep 11 23:59:06.362031 containerd[1512]: time="2025-09-11T23:59:06.362013686Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 23:59:06.362887 containerd[1512]: time="2025-09-11T23:59:06.362855886Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 23:59:06.443891 containerd[1512]: time="2025-09-11T23:59:06.443774526Z" level=info msg="Start subscribing containerd event" Sep 11 23:59:06.443891 containerd[1512]: time="2025-09-11T23:59:06.443857366Z" level=info msg="Start recovering state" Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444063246Z" level=info msg="Start event monitor" Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444095566Z" level=info msg="Start cni network conf syncer for default" Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444104686Z" level=info msg="Start streaming server" Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444113286Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 23:59:06.445871 containerd[1512]:
time="2025-09-11T23:59:06.444250326Z" level=info msg="runtime interface starting up..." Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444259846Z" level=info msg="starting plugins..." Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444276206Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444557486Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444604806Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 23:59:06.445871 containerd[1512]: time="2025-09-11T23:59:06.444675686Z" level=info msg="containerd successfully booted in 0.149586s" Sep 11 23:59:06.444798 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 23:59:06.520139 tar[1501]: linux-arm64/README.md Sep 11 23:59:06.535990 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 23:59:06.954141 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 23:59:06.986821 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 23:59:06.989716 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 23:59:07.012136 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 23:59:07.012379 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 23:59:07.016964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 23:59:07.038447 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 23:59:07.042356 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 23:59:07.044509 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 11 23:59:07.045646 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 23:59:07.807954 systemd-networkd[1427]: eth0: Gained IPv6LL Sep 11 23:59:07.810389 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 23:59:07.811858 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 23:59:07.814030 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 23:59:07.816090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:59:07.818007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 23:59:07.845981 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 23:59:07.848221 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 23:59:07.849809 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 23:59:07.851090 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 23:59:08.374743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:08.376004 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 23:59:08.379480 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 23:59:08.381096 systemd[1]: Startup finished in 2.000s (kernel) + 5.308s (initrd) + 4.110s (userspace) = 11.420s. 
Sep 11 23:59:08.738568 kubelet[1606]: E0911 23:59:08.738466 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 23:59:08.741226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 23:59:08.741371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 23:59:08.741707 systemd[1]: kubelet.service: Consumed 764ms CPU time, 259.7M memory peak. Sep 11 23:59:11.414991 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 23:59:11.418025 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:53534.service - OpenSSH per-connection server daemon (10.0.0.1:53534). Sep 11 23:59:11.570471 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 53534 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:11.572442 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:11.578693 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 23:59:11.582622 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 23:59:11.592473 systemd-logind[1488]: New session 1 of user core. Sep 11 23:59:11.617627 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 23:59:11.627882 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 23:59:11.647002 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 23:59:11.649433 systemd-logind[1488]: New session c1 of user core. Sep 11 23:59:11.777529 systemd[1623]: Queued start job for default target default.target. Sep 11 23:59:11.792691 systemd[1623]: Created slice app.slice - User Application Slice. Sep 11 23:59:11.792722 systemd[1623]: Reached target paths.target - Paths. Sep 11 23:59:11.792782 systemd[1623]: Reached target timers.target - Timers. Sep 11 23:59:11.794095 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 23:59:11.803903 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 23:59:11.803970 systemd[1623]: Reached target sockets.target - Sockets. Sep 11 23:59:11.804015 systemd[1623]: Reached target basic.target - Basic System. Sep 11 23:59:11.804045 systemd[1623]: Reached target default.target - Main User Target. Sep 11 23:59:11.804071 systemd[1623]: Startup finished in 146ms. Sep 11 23:59:11.804166 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 23:59:11.805455 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 23:59:11.864981 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:53548.service - OpenSSH per-connection server daemon (10.0.0.1:53548). Sep 11 23:59:11.910395 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 53548 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:11.911619 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:11.915784 systemd-logind[1488]: New session 2 of user core. Sep 11 23:59:11.929943 systemd[1]: Started session-2.scope - Session 2 of User core. 
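
The kubelet exit at the top of this stretch (status=1) is the classic pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it, so every kubelet start before bootstrap dies at config load and systemd schedules a restart. For illustration only, the smallest file that would clear this particular check is a bare KubeletConfiguration (hand-writing it is not the normal provisioning path; note cgroupDriver matching the SystemdCgroup=true runc option in containerd's CRI config earlier):

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF

With only this much config the kubelet would get past the file load and then fail later for lack of a kubeconfig, which is why the real fix is running the bootstrap tooling rather than creating the file by hand.
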
Sep 11 23:59:11.982093 sshd[1636]: Connection closed by 10.0.0.1 port 53548 Sep 11 23:59:11.982971 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Sep 11 23:59:11.993720 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:53548.service: Deactivated successfully. Sep 11 23:59:11.997040 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 23:59:11.997827 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit. Sep 11 23:59:12.000397 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:53564.service - OpenSSH per-connection server daemon (10.0.0.1:53564). Sep 11 23:59:12.002180 systemd-logind[1488]: Removed session 2. Sep 11 23:59:12.069438 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 53564 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:12.070683 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:12.075206 systemd-logind[1488]: New session 3 of user core. Sep 11 23:59:12.085915 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 23:59:12.134956 sshd[1644]: Connection closed by 10.0.0.1 port 53564 Sep 11 23:59:12.135337 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Sep 11 23:59:12.149680 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:53564.service: Deactivated successfully. Sep 11 23:59:12.152588 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 23:59:12.154942 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit. Sep 11 23:59:12.160664 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:53566.service - OpenSSH per-connection server daemon (10.0.0.1:53566). Sep 11 23:59:12.161443 systemd-logind[1488]: Removed session 3. Sep 11 23:59:12.211889 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 53566 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:12.216194 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:12.221588 systemd-logind[1488]: New session 4 of user core. Sep 11 23:59:12.236884 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 23:59:12.290481 sshd[1652]: Connection closed by 10.0.0.1 port 53566 Sep 11 23:59:12.290937 sshd-session[1650]: pam_unix(sshd:session): session closed for user core Sep 11 23:59:12.312691 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:53566.service: Deactivated successfully. Sep 11 23:59:12.314677 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 23:59:12.317912 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit. Sep 11 23:59:12.318153 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:53576.service - OpenSSH per-connection server daemon (10.0.0.1:53576). Sep 11 23:59:12.323566 systemd-logind[1488]: Removed session 4. Sep 11 23:59:12.378660 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 53576 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:12.383195 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:12.387818 systemd-logind[1488]: New session 5 of user core. Sep 11 23:59:12.394905 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 23:59:12.451227 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 23:59:12.451501 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:59:12.463319 sudo[1661]: pam_unix(sudo:session): session closed for user root Sep 11 23:59:12.464717 sshd[1660]: Connection closed by 10.0.0.1 port 53576 Sep 11 23:59:12.465267 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 11 23:59:12.482864 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:53576.service: Deactivated successfully. Sep 11 23:59:12.484285 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 23:59:12.485054 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit. Sep 11 23:59:12.488594 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:53582.service - OpenSSH per-connection server daemon (10.0.0.1:53582). Sep 11 23:59:12.489402 systemd-logind[1488]: Removed session 5. Sep 11 23:59:12.540621 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 53582 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:12.541987 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:12.546105 systemd-logind[1488]: New session 6 of user core. Sep 11 23:59:12.561964 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 23:59:12.615621 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 23:59:12.615912 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:59:12.621831 sudo[1671]: pam_unix(sudo:session): session closed for user root Sep 11 23:59:12.626277 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 23:59:12.626972 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:59:12.646744 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 23:59:12.683701 augenrules[1693]: No rules Sep 11 23:59:12.684905 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 23:59:12.685132 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 23:59:12.686952 sudo[1670]: pam_unix(sudo:session): session closed for user root Sep 11 23:59:12.690208 sshd[1669]: Connection closed by 10.0.0.1 port 53582 Sep 11 23:59:12.689976 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 11 23:59:12.701157 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:53582.service: Deactivated successfully. Sep 11 23:59:12.705273 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 23:59:12.706255 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit. Sep 11 23:59:12.710982 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:53586.service - OpenSSH per-connection server daemon (10.0.0.1:53586). Sep 11 23:59:12.711716 systemd-logind[1488]: Removed session 6. Sep 11 23:59:12.777432 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 53586 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:59:12.778235 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:59:12.784519 systemd-logind[1488]: New session 7 of user core. Sep 11 23:59:12.797949 systemd[1]: Started session-7.scope - Session 7 of User core. 
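
The sudo sequence above deletes the stock SELinux/default rule files from /etc/audit/rules.d and restarts audit-rules.service; augenrules then compiles an empty ruleset and logs "No rules". The loaded, kernel-side state can be confirmed the same way, as a sketch:

    auditctl -l    # prints "No rules" while the loaded audit ruleset is empty
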
Sep 11 23:59:12.852113 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 23:59:12.852389 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:59:13.169510 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 23:59:13.184252 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 23:59:13.416763 dockerd[1725]: time="2025-09-11T23:59:13.416688806Z" level=info msg="Starting up" Sep 11 23:59:13.418790 dockerd[1725]: time="2025-09-11T23:59:13.417997406Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 23:59:13.603163 dockerd[1725]: time="2025-09-11T23:59:13.602633206Z" level=info msg="Loading containers: start." Sep 11 23:59:13.610770 kernel: Initializing XFRM netlink socket Sep 11 23:59:13.809954 systemd-networkd[1427]: docker0: Link UP Sep 11 23:59:13.814027 dockerd[1725]: time="2025-09-11T23:59:13.813982686Z" level=info msg="Loading containers: done." Sep 11 23:59:13.829925 dockerd[1725]: time="2025-09-11T23:59:13.829874886Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 23:59:13.830065 dockerd[1725]: time="2025-09-11T23:59:13.829960526Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 23:59:13.830065 dockerd[1725]: time="2025-09-11T23:59:13.830056566Z" level=info msg="Initializing buildkit" Sep 11 23:59:13.853017 dockerd[1725]: time="2025-09-11T23:59:13.852977326Z" level=info msg="Completed buildkit initialization" Sep 11 23:59:13.860728 dockerd[1725]: time="2025-09-11T23:59:13.860446366Z" level=info msg="Daemon has completed initialization" Sep 11 23:59:13.860728 dockerd[1725]: time="2025-09-11T23:59:13.860520926Z" level=info msg="API listen on /run/docker.sock" Sep 11 23:59:13.860843 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 23:59:14.439028 containerd[1512]: time="2025-09-11T23:59:14.438993166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 11 23:59:14.522104 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2907966071-merged.mount: Deactivated successfully. Sep 11 23:59:15.038475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606319708.mount: Deactivated successfully. 
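
Note that the PullImage request above is served by containerd's CRI image service, not by the Docker daemon that just started: the control-plane images land in containerd's k8s.io namespace, where the kubelet will later find them. An equivalent manual pull over the CRI socket would look like this (sketch; assumes crictl is installed and uses containerd's default socket path from the config dump earlier):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
           pull registry.k8s.io/kube-apiserver:v1.33.5
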
Sep 11 23:59:16.170268 containerd[1512]: time="2025-09-11T23:59:16.170200646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:16.170777 containerd[1512]: time="2025-09-11T23:59:16.170734006Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 11 23:59:16.171617 containerd[1512]: time="2025-09-11T23:59:16.171550846Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:16.174588 containerd[1512]: time="2025-09-11T23:59:16.174547446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:16.176048 containerd[1512]: time="2025-09-11T23:59:16.176008766Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.7369766s" Sep 11 23:59:16.176048 containerd[1512]: time="2025-09-11T23:59:16.176042526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 11 23:59:16.177462 containerd[1512]: time="2025-09-11T23:59:16.177419006Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 11 23:59:17.222148 containerd[1512]: time="2025-09-11T23:59:17.222103646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:17.223429 containerd[1512]: time="2025-09-11T23:59:17.223357286Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 11 23:59:17.224244 containerd[1512]: time="2025-09-11T23:59:17.224215046Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:17.227245 containerd[1512]: time="2025-09-11T23:59:17.227205966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:17.227851 containerd[1512]: time="2025-09-11T23:59:17.227820646Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.0503636s" Sep 11 23:59:17.227887 containerd[1512]: time="2025-09-11T23:59:17.227863286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 11 23:59:17.228361 
containerd[1512]: time="2025-09-11T23:59:17.228318006Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 11 23:59:18.322068 containerd[1512]: time="2025-09-11T23:59:18.321995126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:18.322491 containerd[1512]: time="2025-09-11T23:59:18.322466406Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 11 23:59:18.323182 containerd[1512]: time="2025-09-11T23:59:18.323157326Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:18.326780 containerd[1512]: time="2025-09-11T23:59:18.326652606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:18.327954 containerd[1512]: time="2025-09-11T23:59:18.327921006Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.09956088s" Sep 11 23:59:18.328096 containerd[1512]: time="2025-09-11T23:59:18.328053126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 11 23:59:18.328580 containerd[1512]: time="2025-09-11T23:59:18.328527006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 11 23:59:18.871111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 23:59:18.874092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:59:19.023545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:19.026687 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 23:59:19.095325 kubelet[2013]: E0911 23:59:19.095264 2013 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 23:59:19.101014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 23:59:19.101126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 23:59:19.102829 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.9M memory peak. Sep 11 23:59:19.356193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603686151.mount: Deactivated successfully. 
Sep 11 23:59:19.728441 containerd[1512]: time="2025-09-11T23:59:19.728385966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:19.728909 containerd[1512]: time="2025-09-11T23:59:19.728863206Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 11 23:59:19.730080 containerd[1512]: time="2025-09-11T23:59:19.730039846Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:19.731738 containerd[1512]: time="2025-09-11T23:59:19.731698926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:19.732257 containerd[1512]: time="2025-09-11T23:59:19.732231166Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.40367636s" Sep 11 23:59:19.732294 containerd[1512]: time="2025-09-11T23:59:19.732261966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 11 23:59:19.732910 containerd[1512]: time="2025-09-11T23:59:19.732889006Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 11 23:59:20.278589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858985946.mount: Deactivated successfully. 
Sep 11 23:59:21.052680 containerd[1512]: time="2025-09-11T23:59:21.052608726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:21.053339 containerd[1512]: time="2025-09-11T23:59:21.053300766Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 11 23:59:21.055599 containerd[1512]: time="2025-09-11T23:59:21.055557086Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:21.057762 containerd[1512]: time="2025-09-11T23:59:21.057721526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:21.058918 containerd[1512]: time="2025-09-11T23:59:21.058885766Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.3259672s" Sep 11 23:59:21.058918 containerd[1512]: time="2025-09-11T23:59:21.058917846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 11 23:59:21.059327 containerd[1512]: time="2025-09-11T23:59:21.059294286Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 23:59:21.489876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480119091.mount: Deactivated successfully. 
Sep 11 23:59:21.493709 containerd[1512]: time="2025-09-11T23:59:21.493669766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 23:59:21.494199 containerd[1512]: time="2025-09-11T23:59:21.494122086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 11 23:59:21.495272 containerd[1512]: time="2025-09-11T23:59:21.495030766Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 23:59:21.497305 containerd[1512]: time="2025-09-11T23:59:21.497278766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 23:59:21.498437 containerd[1512]: time="2025-09-11T23:59:21.498411846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 439.02276ms" Sep 11 23:59:21.498529 containerd[1512]: time="2025-09-11T23:59:21.498514086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 11 23:59:21.499252 containerd[1512]: time="2025-09-11T23:59:21.499089566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 11 23:59:21.932292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337338672.mount: Deactivated successfully. 
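
With pause:3.10 fetched and the etcd:3.5.21-0 pull just started, the node is assembling exactly the image set a v1.33.5 control plane needs: kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy at v1.33.5, coredns v1.12.0, pause 3.10, and etcd 3.5.21-0. That list can be reproduced for comparison with (sketch):

    kubeadm config images list --kubernetes-version v1.33.5
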
Sep 11 23:59:23.634996 containerd[1512]: time="2025-09-11T23:59:23.634934646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:23.636474 containerd[1512]: time="2025-09-11T23:59:23.636441526Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 11 23:59:23.637426 containerd[1512]: time="2025-09-11T23:59:23.637394446Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:23.640339 containerd[1512]: time="2025-09-11T23:59:23.640285326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:23.641436 containerd[1512]: time="2025-09-11T23:59:23.641394486Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.14227388s" Sep 11 23:59:23.641494 containerd[1512]: time="2025-09-11T23:59:23.641436606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 11 23:59:27.847822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:27.847970 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.9M memory peak. Sep 11 23:59:27.849880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:59:27.870834 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)... Sep 11 23:59:27.870851 systemd[1]: Reloading... Sep 11 23:59:27.947788 zram_generator::config[2216]: No configuration found. Sep 11 23:59:28.081106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 23:59:28.170474 systemd[1]: Reloading finished in 299 ms. Sep 11 23:59:28.240209 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 23:59:28.240290 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 23:59:28.240512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:28.240558 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.1M memory peak. Sep 11 23:59:28.242133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:59:28.349020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:28.365085 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 23:59:28.397643 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:59:28.397643 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 11 23:59:28.397643 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:59:28.397984 kubelet[2258]: I0911 23:59:28.397686 2258 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 23:59:28.894209 kubelet[2258]: I0911 23:59:28.894160 2258 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 23:59:28.894209 kubelet[2258]: I0911 23:59:28.894198 2258 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 23:59:28.894434 kubelet[2258]: I0911 23:59:28.894411 2258 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 23:59:28.909563 kubelet[2258]: E0911 23:59:28.909518 2258 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 11 23:59:28.910958 kubelet[2258]: I0911 23:59:28.910910 2258 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:59:28.918445 kubelet[2258]: I0911 23:59:28.918392 2258 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 23:59:28.920979 kubelet[2258]: I0911 23:59:28.920960 2258 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 23:59:28.921234 kubelet[2258]: I0911 23:59:28.921214 2258 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 23:59:28.921409 kubelet[2258]: I0911 23:59:28.921235 2258 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 23:59:28.921494 kubelet[2258]: I0911 23:59:28.921476 2258 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 23:59:28.921494 kubelet[2258]: I0911 23:59:28.921484 2258 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 23:59:28.922217 kubelet[2258]: I0911 23:59:28.922183 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:59:28.924630 kubelet[2258]: I0911 23:59:28.924610 2258 kubelet.go:480] "Attempting to sync node with API server" Sep 11 23:59:28.924677 kubelet[2258]: I0911 23:59:28.924641 2258 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 23:59:28.924677 kubelet[2258]: I0911 23:59:28.924665 2258 kubelet.go:386] "Adding apiserver pod source" Sep 11 23:59:28.924729 kubelet[2258]: I0911 23:59:28.924680 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 23:59:28.926101 kubelet[2258]: I0911 23:59:28.926079 2258 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 23:59:28.927104 kubelet[2258]: I0911 23:59:28.926848 2258 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 23:59:28.927104 kubelet[2258]: W0911 23:59:28.926968 2258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 11 23:59:28.927236 kubelet[2258]: E0911 23:59:28.927175 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 11 23:59:28.928502 kubelet[2258]: E0911 23:59:28.928472 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 11 23:59:28.930141 kubelet[2258]: I0911 23:59:28.930109 2258 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 23:59:28.930210 kubelet[2258]: I0911 23:59:28.930157 2258 server.go:1289] "Started kubelet" Sep 11 23:59:28.930305 kubelet[2258]: I0911 23:59:28.930268 2258 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:59:28.931464 kubelet[2258]: I0911 23:59:28.931441 2258 server.go:317] "Adding debug handlers to kubelet server" Sep 11 23:59:28.932593 kubelet[2258]: I0911 23:59:28.932184 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:59:28.932684 kubelet[2258]: I0911 23:59:28.932658 2258 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:59:28.933978 kubelet[2258]: I0911 23:59:28.933953 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 23:59:28.934188 kubelet[2258]: I0911 23:59:28.934168 2258 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:59:28.935706 kubelet[2258]: E0911 23:59:28.934920 2258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:59:28.935706 kubelet[2258]: I0911 23:59:28.934947 2258 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 23:59:28.935706 kubelet[2258]: I0911 23:59:28.935106 2258 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 23:59:28.935706 kubelet[2258]: I0911 23:59:28.935154 2258 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:59:28.935706 kubelet[2258]: E0911 23:59:28.935501 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 11 23:59:28.935706 kubelet[2258]: E0911 23:59:28.935593 2258 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:59:28.935706 kubelet[2258]: E0911 23:59:28.935695 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Sep 11 23:59:28.938116 kubelet[2258]: I0911 23:59:28.938072 2258 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:59:28.938191 kubelet[2258]: E0911 23:59:28.932892 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18645fdfb0d06a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 23:59:28.930130526 +0000 UTC m=+0.561671241,LastTimestamp:2025-09-11 23:59:28.930130526 +0000 UTC m=+0.561671241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 23:59:28.939310 kubelet[2258]: I0911 23:59:28.939271 2258 factory.go:223] Registration of the containerd container factory successfully Sep 11 23:59:28.939310 kubelet[2258]: I0911 23:59:28.939304 2258 factory.go:223] Registration of the systemd container factory successfully Sep 11 23:59:28.949025 kubelet[2258]: I0911 23:59:28.949003 2258 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 23:59:28.949025 kubelet[2258]: I0911 23:59:28.949020 2258 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 23:59:28.949121 kubelet[2258]: I0911 23:59:28.949038 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:59:28.951793 kubelet[2258]: I0911 23:59:28.951217 2258 policy_none.go:49] "None policy: Start" Sep 11 23:59:28.951793 kubelet[2258]: I0911 23:59:28.951244 2258 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 23:59:28.951793 kubelet[2258]: I0911 23:59:28.951254 2258 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:59:28.954715 kubelet[2258]: I0911 23:59:28.954670 2258 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 23:59:28.955777 kubelet[2258]: I0911 23:59:28.955657 2258 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 11 23:59:28.955777 kubelet[2258]: I0911 23:59:28.955678 2258 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 23:59:28.955777 kubelet[2258]: I0911 23:59:28.955695 2258 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 11 23:59:28.955777 kubelet[2258]: I0911 23:59:28.955701 2258 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 23:59:28.955777 kubelet[2258]: E0911 23:59:28.955743 2258 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:59:28.956348 kubelet[2258]: E0911 23:59:28.956320 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 11 23:59:28.957058 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 23:59:28.969242 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 23:59:28.992985 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 23:59:28.994890 kubelet[2258]: E0911 23:59:28.994862 2258 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 23:59:28.995092 kubelet[2258]: I0911 23:59:28.995074 2258 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:59:28.995118 kubelet[2258]: I0911 23:59:28.995094 2258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:59:28.995842 kubelet[2258]: I0911 23:59:28.995712 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:59:28.996649 kubelet[2258]: E0911 23:59:28.996625 2258 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 23:59:28.996803 kubelet[2258]: E0911 23:59:28.996786 2258 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 23:59:29.082074 systemd[1]: Created slice kubepods-burstable-pod07998eb79bf10254202ffe2b11b78a6c.slice - libcontainer container kubepods-burstable-pod07998eb79bf10254202ffe2b11b78a6c.slice. 
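
Every "connect: connection refused" against 10.0.0.138:6443 in this stretch is the normal bootstrap ordering problem: the kubelet is up before the kube-apiserver static pod it is about to launch, so node registration, lease renewal, event posting, and the informer watches all fail until the control plane answers on 6443. Once the apiserver container is running, a probe like this starts succeeding (sketch; kubeadm clusters allow anonymous access to the health endpoints):

    curl -ks https://10.0.0.138:6443/healthz    # returns "ok" once the apiserver is serving
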
Sep 11 23:59:29.097151 kubelet[2258]: I0911 23:59:29.097048 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:59:29.097598 kubelet[2258]: E0911 23:59:29.097569 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Sep 11 23:59:29.097827 kubelet[2258]: E0911 23:59:29.097809 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:29.136072 kubelet[2258]: E0911 23:59:29.136027 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Sep 11 23:59:29.136072 kubelet[2258]: I0911 23:59:29.136059 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07998eb79bf10254202ffe2b11b78a6c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07998eb79bf10254202ffe2b11b78a6c\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:29.136175 kubelet[2258]: I0911 23:59:29.136082 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:29.136175 kubelet[2258]: I0911 23:59:29.136099 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:29.136175 kubelet[2258]: I0911 23:59:29.136112 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:29.136175 kubelet[2258]: I0911 23:59:29.136130 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:29.136175 kubelet[2258]: I0911 23:59:29.136150 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:59:29.136271 kubelet[2258]: I0911 23:59:29.136182 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/07998eb79bf10254202ffe2b11b78a6c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07998eb79bf10254202ffe2b11b78a6c\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:29.136271 kubelet[2258]: I0911 23:59:29.136210 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07998eb79bf10254202ffe2b11b78a6c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07998eb79bf10254202ffe2b11b78a6c\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:29.136271 kubelet[2258]: I0911 23:59:29.136249 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:29.138470 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 11 23:59:29.140049 kubelet[2258]: E0911 23:59:29.139892 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:29.299493 kubelet[2258]: I0911 23:59:29.299404 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:59:29.299743 kubelet[2258]: E0911 23:59:29.299715 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Sep 11 23:59:29.306556 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 11 23:59:29.308465 kubelet[2258]: E0911 23:59:29.308437 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:29.308706 kubelet[2258]: E0911 23:59:29.308680 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.309184 containerd[1512]: time="2025-09-11T23:59:29.309152926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 11 23:59:29.389358 containerd[1512]: time="2025-09-11T23:59:29.389318166Z" level=info msg="connecting to shim 807e0b80ca5597345d8c099330cdb046546beb51b013651496a20f6932117147" address="unix:///run/containerd/s/e61111f38edd5385a9c593dc3986b820b1642732d16f00f95560d9c6ab10f836" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:29.399186 kubelet[2258]: E0911 23:59:29.399094 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.399972 containerd[1512]: time="2025-09-11T23:59:29.399928806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07998eb79bf10254202ffe2b11b78a6c,Namespace:kube-system,Attempt:0,}" Sep 11 23:59:29.414121 systemd[1]: Started cri-containerd-807e0b80ca5597345d8c099330cdb046546beb51b013651496a20f6932117147.scope - libcontainer container 807e0b80ca5597345d8c099330cdb046546beb51b013651496a20f6932117147. Sep 11 23:59:29.419271 containerd[1512]: time="2025-09-11T23:59:29.419235486Z" level=info msg="connecting to shim d9b2ec2cdbe4c417d7108c4fcecae70b4d28c23cb81f0473b26e0984ab9b268a" address="unix:///run/containerd/s/50192fab351b28c57a342e315590324cbae8d71b9560324798754901e34e3704" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:29.440632 kubelet[2258]: E0911 23:59:29.440579 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.441565 containerd[1512]: time="2025-09-11T23:59:29.441450366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 11 23:59:29.443911 systemd[1]: Started cri-containerd-d9b2ec2cdbe4c417d7108c4fcecae70b4d28c23cb81f0473b26e0984ab9b268a.scope - libcontainer container d9b2ec2cdbe4c417d7108c4fcecae70b4d28c23cb81f0473b26e0984ab9b268a. 
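The containerd entries above are the CRI calls the kubelet makes to start a static pod's sandbox: RunPodSandbox carries the PodSandboxMetadata printed in the log, containerd starts a shim and a .scope unit, and the call returns a sandbox id. A hedged sketch of the same call against the CRI v1 API; the socket path is the conventional containerd endpoint, assumed rather than taken from the log (the per-pod shim sockets shown above are internal to containerd):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint for containerd.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Metadata copied from the kube-scheduler RunPodSandbox entry above.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost",
				Uid:       "7b968cf906b2d9d713a362c43868bef2",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // the "returns sandbox id" value below
}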
Sep 11 23:59:29.455933 containerd[1512]: time="2025-09-11T23:59:29.455901046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"807e0b80ca5597345d8c099330cdb046546beb51b013651496a20f6932117147\"" Sep 11 23:59:29.457218 kubelet[2258]: E0911 23:59:29.457168 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.462755 containerd[1512]: time="2025-09-11T23:59:29.462470526Z" level=info msg="CreateContainer within sandbox \"807e0b80ca5597345d8c099330cdb046546beb51b013651496a20f6932117147\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 23:59:29.468254 containerd[1512]: time="2025-09-11T23:59:29.468224006Z" level=info msg="connecting to shim 4f9d7b9471adf31c09a1fb3663933c93a55d5a44b9bf626d78aa5762679474e0" address="unix:///run/containerd/s/990cdee2482157cd13e2687551f7b798dd52ff083c0c2528126a48960be1da4f" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:29.469512 containerd[1512]: time="2025-09-11T23:59:29.469477366Z" level=info msg="Container b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:59:29.480900 containerd[1512]: time="2025-09-11T23:59:29.480857726Z" level=info msg="CreateContainer within sandbox \"807e0b80ca5597345d8c099330cdb046546beb51b013651496a20f6932117147\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571\"" Sep 11 23:59:29.481626 containerd[1512]: time="2025-09-11T23:59:29.481591086Z" level=info msg="StartContainer for \"b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571\"" Sep 11 23:59:29.482722 containerd[1512]: time="2025-09-11T23:59:29.482680526Z" level=info msg="connecting to shim b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571" address="unix:///run/containerd/s/e61111f38edd5385a9c593dc3986b820b1642732d16f00f95560d9c6ab10f836" protocol=ttrpc version=3 Sep 11 23:59:29.485142 containerd[1512]: time="2025-09-11T23:59:29.485108126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07998eb79bf10254202ffe2b11b78a6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b2ec2cdbe4c417d7108c4fcecae70b4d28c23cb81f0473b26e0984ab9b268a\"" Sep 11 23:59:29.486951 kubelet[2258]: E0911 23:59:29.486930 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.490508 containerd[1512]: time="2025-09-11T23:59:29.490477966Z" level=info msg="CreateContainer within sandbox \"d9b2ec2cdbe4c417d7108c4fcecae70b4d28c23cb81f0473b26e0984ab9b268a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 23:59:29.496996 containerd[1512]: time="2025-09-11T23:59:29.496965326Z" level=info msg="Container 50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:59:29.501909 systemd[1]: Started cri-containerd-4f9d7b9471adf31c09a1fb3663933c93a55d5a44b9bf626d78aa5762679474e0.scope - libcontainer container 4f9d7b9471adf31c09a1fb3663933c93a55d5a44b9bf626d78aa5762679474e0. 
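The recurring dns.go:153 error is the kubelet warning that the host lists more nameservers than the resolver limit of three, so pod resolv.conf files are truncated to the three servers shown in the message. A minimal sketch of that truncation; the fourth server in the example is hypothetical:

package main

import "fmt"

const maxNameservers = 3 // resolver limit the kubelet enforces for pods

// capNameservers returns at most maxNameservers entries and whether any were dropped.
func capNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println(applied, omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}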
Sep 11 23:59:29.503038 containerd[1512]: time="2025-09-11T23:59:29.503002166Z" level=info msg="CreateContainer within sandbox \"d9b2ec2cdbe4c417d7108c4fcecae70b4d28c23cb81f0473b26e0984ab9b268a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a\"" Sep 11 23:59:29.503390 containerd[1512]: time="2025-09-11T23:59:29.503363446Z" level=info msg="StartContainer for \"50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a\"" Sep 11 23:59:29.504725 containerd[1512]: time="2025-09-11T23:59:29.504694606Z" level=info msg="connecting to shim 50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a" address="unix:///run/containerd/s/50192fab351b28c57a342e315590324cbae8d71b9560324798754901e34e3704" protocol=ttrpc version=3 Sep 11 23:59:29.504733 systemd[1]: Started cri-containerd-b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571.scope - libcontainer container b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571. Sep 11 23:59:29.529075 systemd[1]: Started cri-containerd-50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a.scope - libcontainer container 50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a. Sep 11 23:59:29.536984 kubelet[2258]: E0911 23:59:29.536936 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Sep 11 23:59:29.543219 containerd[1512]: time="2025-09-11T23:59:29.543181006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f9d7b9471adf31c09a1fb3663933c93a55d5a44b9bf626d78aa5762679474e0\"" Sep 11 23:59:29.544733 kubelet[2258]: E0911 23:59:29.544712 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.548578 containerd[1512]: time="2025-09-11T23:59:29.548537926Z" level=info msg="CreateContainer within sandbox \"4f9d7b9471adf31c09a1fb3663933c93a55d5a44b9bf626d78aa5762679474e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 23:59:29.560832 containerd[1512]: time="2025-09-11T23:59:29.560489166Z" level=info msg="StartContainer for \"b483f15cdf63e40ccee388a5ea18c7c4a1c51e6f3e3d8fc741814c549e027571\" returns successfully" Sep 11 23:59:29.560832 containerd[1512]: time="2025-09-11T23:59:29.560543046Z" level=info msg="Container 76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:59:29.567265 containerd[1512]: time="2025-09-11T23:59:29.567231846Z" level=info msg="CreateContainer within sandbox \"4f9d7b9471adf31c09a1fb3663933c93a55d5a44b9bf626d78aa5762679474e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9\"" Sep 11 23:59:29.568431 containerd[1512]: time="2025-09-11T23:59:29.568409926Z" level=info msg="StartContainer for \"76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9\"" Sep 11 23:59:29.569970 containerd[1512]: time="2025-09-11T23:59:29.569947926Z" level=info msg="connecting to shim 
76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9" address="unix:///run/containerd/s/990cdee2482157cd13e2687551f7b798dd52ff083c0c2528126a48960be1da4f" protocol=ttrpc version=3 Sep 11 23:59:29.586345 containerd[1512]: time="2025-09-11T23:59:29.586301766Z" level=info msg="StartContainer for \"50ebc36e73deae2ed52b2f30321e9f51f6394d3c03911cb7e96eacfff455926a\" returns successfully" Sep 11 23:59:29.591896 systemd[1]: Started cri-containerd-76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9.scope - libcontainer container 76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9. Sep 11 23:59:29.630996 containerd[1512]: time="2025-09-11T23:59:29.630954246Z" level=info msg="StartContainer for \"76f198e59dde012817394f1088b76be8face7ec42c6877490de4bafa3988f2c9\" returns successfully" Sep 11 23:59:29.701860 kubelet[2258]: I0911 23:59:29.701825 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:59:29.963294 kubelet[2258]: E0911 23:59:29.963255 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:29.963557 kubelet[2258]: E0911 23:59:29.963418 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.967230 kubelet[2258]: E0911 23:59:29.967207 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:29.967339 kubelet[2258]: E0911 23:59:29.967322 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:29.968335 kubelet[2258]: E0911 23:59:29.968312 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:29.968465 kubelet[2258]: E0911 23:59:29.968448 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:30.903269 kubelet[2258]: E0911 23:59:30.903211 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 23:59:30.926031 kubelet[2258]: I0911 23:59:30.926000 2258 apiserver.go:52] "Watching apiserver" Sep 11 23:59:30.969870 kubelet[2258]: E0911 23:59:30.969838 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:30.969984 kubelet[2258]: E0911 23:59:30.969850 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 23:59:30.969984 kubelet[2258]: E0911 23:59:30.969975 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:30.970057 kubelet[2258]: E0911 23:59:30.970029 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:31.035781 
kubelet[2258]: I0911 23:59:31.035738 2258 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 23:59:31.076766 kubelet[2258]: I0911 23:59:31.076137 2258 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 23:59:31.076766 kubelet[2258]: E0911 23:59:31.076173 2258 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 23:59:31.136243 kubelet[2258]: I0911 23:59:31.136150 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 23:59:31.146178 kubelet[2258]: E0911 23:59:31.146132 2258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 23:59:31.146178 kubelet[2258]: I0911 23:59:31.146164 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:31.148567 kubelet[2258]: E0911 23:59:31.148481 2258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:31.148567 kubelet[2258]: I0911 23:59:31.148506 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:31.150780 kubelet[2258]: E0911 23:59:31.150739 2258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:32.797422 systemd[1]: Reload requested from client PID 2545 ('systemctl') (unit session-7.scope)... Sep 11 23:59:32.797439 systemd[1]: Reloading... Sep 11 23:59:32.863841 zram_generator::config[2588]: No configuration found. Sep 11 23:59:32.932119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 23:59:33.029952 systemd[1]: Reloading finished in 232 ms. Sep 11 23:59:33.054932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:59:33.072343 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 23:59:33.072832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:33.072923 systemd[1]: kubelet.service: Consumed 892ms CPU time, 126.3M memory peak. Sep 11 23:59:33.075051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:59:33.212481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:59:33.217078 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 23:59:33.248847 kubelet[2630]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:59:33.248847 kubelet[2630]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
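The "Failed creating a mirror pod" errors above occur because the system-node-critical PriorityClass does not exist yet; the API server creates the built-in priority classes itself shortly after it comes up, which is why these errors stop later in the log. For illustration only, the object the kubelet is waiting on looks roughly like this via client-go; the kubeconfig path is an assumption, and creating this class manually is normally unnecessary:

package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // conventional value of the built-in class
		Description: "Used for node-critical pods such as the static control plane.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(
		context.Background(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}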
Sep 11 23:59:33.248847 kubelet[2630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:59:33.249175 kubelet[2630]: I0911 23:59:33.248885 2630 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 23:59:33.254868 kubelet[2630]: I0911 23:59:33.254317 2630 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 23:59:33.254868 kubelet[2630]: I0911 23:59:33.254351 2630 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 23:59:33.254868 kubelet[2630]: I0911 23:59:33.254714 2630 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 23:59:33.256560 kubelet[2630]: I0911 23:59:33.256529 2630 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 11 23:59:33.258954 kubelet[2630]: I0911 23:59:33.258872 2630 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:59:33.262673 kubelet[2630]: I0911 23:59:33.262655 2630 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 23:59:33.265284 kubelet[2630]: I0911 23:59:33.265243 2630 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 23:59:33.265479 kubelet[2630]: I0911 23:59:33.265456 2630 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 23:59:33.265616 kubelet[2630]: I0911 23:59:33.265482 2630 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 23:59:33.265692 kubelet[2630]: I0911 23:59:33.265624 2630 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 23:59:33.265692 
kubelet[2630]: I0911 23:59:33.265633 2630 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 23:59:33.265692 kubelet[2630]: I0911 23:59:33.265673 2630 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:59:33.265860 kubelet[2630]: I0911 23:59:33.265848 2630 kubelet.go:480] "Attempting to sync node with API server" Sep 11 23:59:33.265895 kubelet[2630]: I0911 23:59:33.265867 2630 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 23:59:33.265895 kubelet[2630]: I0911 23:59:33.265888 2630 kubelet.go:386] "Adding apiserver pod source" Sep 11 23:59:33.265946 kubelet[2630]: I0911 23:59:33.265899 2630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 23:59:33.267412 kubelet[2630]: I0911 23:59:33.267230 2630 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 23:59:33.267913 kubelet[2630]: I0911 23:59:33.267888 2630 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 23:59:33.280420 kubelet[2630]: I0911 23:59:33.280383 2630 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 23:59:33.280507 kubelet[2630]: I0911 23:59:33.280482 2630 server.go:1289] "Started kubelet" Sep 11 23:59:33.280627 kubelet[2630]: I0911 23:59:33.280559 2630 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:59:33.282561 kubelet[2630]: I0911 23:59:33.282048 2630 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:59:33.282561 kubelet[2630]: I0911 23:59:33.282196 2630 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:59:33.283919 kubelet[2630]: I0911 23:59:33.283889 2630 server.go:317] "Adding debug handlers to kubelet server" Sep 11 23:59:33.284234 kubelet[2630]: I0911 23:59:33.284168 2630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 23:59:33.285177 kubelet[2630]: I0911 23:59:33.284627 2630 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:59:33.286784 kubelet[2630]: I0911 23:59:33.285876 2630 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 23:59:33.286784 kubelet[2630]: E0911 23:59:33.286055 2630 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:59:33.286784 kubelet[2630]: I0911 23:59:33.286655 2630 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 23:59:33.286784 kubelet[2630]: I0911 23:59:33.286781 2630 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:59:33.291059 kubelet[2630]: I0911 23:59:33.291029 2630 factory.go:223] Registration of the systemd container factory successfully Sep 11 23:59:33.291149 kubelet[2630]: I0911 23:59:33.291125 2630 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:59:33.295033 kubelet[2630]: I0911 23:59:33.295004 2630 factory.go:223] Registration of the containerd container factory successfully Sep 11 23:59:33.296380 kubelet[2630]: E0911 23:59:33.296343 2630 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:59:33.319672 kubelet[2630]: I0911 23:59:33.318377 2630 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 23:59:33.320285 kubelet[2630]: I0911 23:59:33.320246 2630 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 11 23:59:33.320285 kubelet[2630]: I0911 23:59:33.320277 2630 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 23:59:33.320369 kubelet[2630]: I0911 23:59:33.320298 2630 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 23:59:33.320369 kubelet[2630]: I0911 23:59:33.320304 2630 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 23:59:33.320369 kubelet[2630]: E0911 23:59:33.320343 2630 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:59:33.336039 kubelet[2630]: I0911 23:59:33.336013 2630 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 23:59:33.336039 kubelet[2630]: I0911 23:59:33.336033 2630 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 23:59:33.336185 kubelet[2630]: I0911 23:59:33.336054 2630 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:59:33.336185 kubelet[2630]: I0911 23:59:33.336175 2630 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 23:59:33.336230 kubelet[2630]: I0911 23:59:33.336184 2630 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 23:59:33.336230 kubelet[2630]: I0911 23:59:33.336200 2630 policy_none.go:49] "None policy: Start" Sep 11 23:59:33.336230 kubelet[2630]: I0911 23:59:33.336208 2630 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 23:59:33.336230 kubelet[2630]: I0911 23:59:33.336216 2630 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:59:33.336317 kubelet[2630]: I0911 23:59:33.336312 2630 state_mem.go:75] "Updated machine memory state" Sep 11 23:59:33.340178 kubelet[2630]: E0911 23:59:33.340155 2630 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 23:59:33.340342 kubelet[2630]: I0911 23:59:33.340324 2630 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:59:33.340373 kubelet[2630]: I0911 23:59:33.340341 2630 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:59:33.341150 kubelet[2630]: I0911 23:59:33.341115 2630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:59:33.341792 kubelet[2630]: E0911 23:59:33.341737 2630 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 11 23:59:33.421988 kubelet[2630]: I0911 23:59:33.421949 2630 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 23:59:33.422415 kubelet[2630]: I0911 23:59:33.422204 2630 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:33.422652 kubelet[2630]: I0911 23:59:33.422503 2630 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:33.444183 kubelet[2630]: I0911 23:59:33.444157 2630 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 23:59:33.450494 kubelet[2630]: I0911 23:59:33.450469 2630 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 23:59:33.450672 kubelet[2630]: I0911 23:59:33.450650 2630 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 23:59:33.588840 kubelet[2630]: I0911 23:59:33.588571 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07998eb79bf10254202ffe2b11b78a6c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07998eb79bf10254202ffe2b11b78a6c\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:33.588840 kubelet[2630]: I0911 23:59:33.588606 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07998eb79bf10254202ffe2b11b78a6c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07998eb79bf10254202ffe2b11b78a6c\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:33.588840 kubelet[2630]: I0911 23:59:33.588633 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07998eb79bf10254202ffe2b11b78a6c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07998eb79bf10254202ffe2b11b78a6c\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:59:33.588840 kubelet[2630]: I0911 23:59:33.588651 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:33.588840 kubelet[2630]: I0911 23:59:33.588668 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:33.589029 kubelet[2630]: I0911 23:59:33.588685 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:33.589029 kubelet[2630]: I0911 23:59:33.588698 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:33.589029 kubelet[2630]: I0911 23:59:33.588712 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:59:33.589029 kubelet[2630]: I0911 23:59:33.588728 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:59:33.727799 kubelet[2630]: E0911 23:59:33.727740 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:33.728122 kubelet[2630]: E0911 23:59:33.727898 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:33.728122 kubelet[2630]: E0911 23:59:33.728017 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:34.267722 kubelet[2630]: I0911 23:59:34.267660 2630 apiserver.go:52] "Watching apiserver" Sep 11 23:59:34.287215 kubelet[2630]: I0911 23:59:34.287125 2630 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 23:59:34.334666 kubelet[2630]: E0911 23:59:34.334634 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:34.335105 kubelet[2630]: E0911 23:59:34.335084 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:34.335325 kubelet[2630]: E0911 23:59:34.335304 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:34.366064 kubelet[2630]: I0911 23:59:34.365991 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.365976406 podStartE2EDuration="1.365976406s" podCreationTimestamp="2025-09-11 23:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:59:34.365701286 +0000 UTC m=+1.145102121" watchObservedRunningTime="2025-09-11 23:59:34.365976406 +0000 UTC m=+1.145377201" Sep 11 23:59:34.366219 kubelet[2630]: I0911 23:59:34.366098 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.366093206 podStartE2EDuration="1.366093206s" podCreationTimestamp="2025-09-11 
23:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:59:34.357188886 +0000 UTC m=+1.136589761" watchObservedRunningTime="2025-09-11 23:59:34.366093206 +0000 UTC m=+1.145494001" Sep 11 23:59:34.382249 kubelet[2630]: I0911 23:59:34.382173 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.382159566 podStartE2EDuration="1.382159566s" podCreationTimestamp="2025-09-11 23:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:59:34.374059966 +0000 UTC m=+1.153460761" watchObservedRunningTime="2025-09-11 23:59:34.382159566 +0000 UTC m=+1.161560401" Sep 11 23:59:35.335682 kubelet[2630]: E0911 23:59:35.335595 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:35.336038 kubelet[2630]: E0911 23:59:35.335725 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:36.410290 kubelet[2630]: E0911 23:59:36.410160 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:37.564329 kubelet[2630]: E0911 23:59:37.564290 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:40.604567 kubelet[2630]: I0911 23:59:40.604535 2630 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 23:59:40.606378 containerd[1512]: time="2025-09-11T23:59:40.605954150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 23:59:40.606603 kubelet[2630]: I0911 23:59:40.606179 2630 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 23:59:41.262758 kubelet[2630]: E0911 23:59:41.262051 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:41.345387 kubelet[2630]: E0911 23:59:41.345266 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:41.726390 systemd[1]: Created slice kubepods-besteffort-pod799f2ab4_8f29_4caf_b821_04d0abf75ce3.slice - libcontainer container kubepods-besteffort-pod799f2ab4_8f29_4caf_b821_04d0abf75ce3.slice. 
Sep 11 23:59:41.746338 kubelet[2630]: I0911 23:59:41.746297 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/799f2ab4-8f29-4caf-b821-04d0abf75ce3-kube-proxy\") pod \"kube-proxy-pt546\" (UID: \"799f2ab4-8f29-4caf-b821-04d0abf75ce3\") " pod="kube-system/kube-proxy-pt546" Sep 11 23:59:41.746338 kubelet[2630]: I0911 23:59:41.746335 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/799f2ab4-8f29-4caf-b821-04d0abf75ce3-xtables-lock\") pod \"kube-proxy-pt546\" (UID: \"799f2ab4-8f29-4caf-b821-04d0abf75ce3\") " pod="kube-system/kube-proxy-pt546" Sep 11 23:59:41.746658 kubelet[2630]: I0911 23:59:41.746356 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/799f2ab4-8f29-4caf-b821-04d0abf75ce3-lib-modules\") pod \"kube-proxy-pt546\" (UID: \"799f2ab4-8f29-4caf-b821-04d0abf75ce3\") " pod="kube-system/kube-proxy-pt546" Sep 11 23:59:41.746658 kubelet[2630]: I0911 23:59:41.746391 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8scqg\" (UniqueName: \"kubernetes.io/projected/799f2ab4-8f29-4caf-b821-04d0abf75ce3-kube-api-access-8scqg\") pod \"kube-proxy-pt546\" (UID: \"799f2ab4-8f29-4caf-b821-04d0abf75ce3\") " pod="kube-system/kube-proxy-pt546" Sep 11 23:59:41.792851 systemd[1]: Created slice kubepods-besteffort-podf131b582_6d54_4b11_8913_907698ea57d8.slice - libcontainer container kubepods-besteffort-podf131b582_6d54_4b11_8913_907698ea57d8.slice. Sep 11 23:59:41.847523 kubelet[2630]: I0911 23:59:41.847257 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzchn\" (UniqueName: \"kubernetes.io/projected/f131b582-6d54-4b11-8913-907698ea57d8-kube-api-access-bzchn\") pod \"tigera-operator-755d956888-kz6rk\" (UID: \"f131b582-6d54-4b11-8913-907698ea57d8\") " pod="tigera-operator/tigera-operator-755d956888-kz6rk" Sep 11 23:59:41.847523 kubelet[2630]: I0911 23:59:41.847378 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f131b582-6d54-4b11-8913-907698ea57d8-var-lib-calico\") pod \"tigera-operator-755d956888-kz6rk\" (UID: \"f131b582-6d54-4b11-8913-907698ea57d8\") " pod="tigera-operator/tigera-operator-755d956888-kz6rk" Sep 11 23:59:42.041227 kubelet[2630]: E0911 23:59:42.040846 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:42.042001 containerd[1512]: time="2025-09-11T23:59:42.041946624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pt546,Uid:799f2ab4-8f29-4caf-b821-04d0abf75ce3,Namespace:kube-system,Attempt:0,}" Sep 11 23:59:42.059925 containerd[1512]: time="2025-09-11T23:59:42.059888820Z" level=info msg="connecting to shim 8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13" address="unix:///run/containerd/s/231c6329fb6a51fce390b83f1b053279e879fd7d482c6042b081e2dff72dc765" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:42.081905 systemd[1]: Started cri-containerd-8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13.scope - libcontainer container 
8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13. Sep 11 23:59:42.096347 containerd[1512]: time="2025-09-11T23:59:42.096308576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kz6rk,Uid:f131b582-6d54-4b11-8913-907698ea57d8,Namespace:tigera-operator,Attempt:0,}" Sep 11 23:59:42.104188 containerd[1512]: time="2025-09-11T23:59:42.104097329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pt546,Uid:799f2ab4-8f29-4caf-b821-04d0abf75ce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13\"" Sep 11 23:59:42.104931 kubelet[2630]: E0911 23:59:42.104909 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:42.112714 containerd[1512]: time="2025-09-11T23:59:42.112173003Z" level=info msg="CreateContainer within sandbox \"8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 23:59:42.121821 containerd[1512]: time="2025-09-11T23:59:42.121788084Z" level=info msg="Container 967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:59:42.123414 containerd[1512]: time="2025-09-11T23:59:42.123380691Z" level=info msg="connecting to shim 8c0f99e1f7e8717003ad1fcd1ed595b10b9fe49ce406bbc6421ffe140c96698e" address="unix:///run/containerd/s/e76842a9280903bf669d9de2e4f1ddbaebb8610063f9b74cf8e9916e5a035e59" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:42.129015 containerd[1512]: time="2025-09-11T23:59:42.128980395Z" level=info msg="CreateContainer within sandbox \"8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a\"" Sep 11 23:59:42.129736 containerd[1512]: time="2025-09-11T23:59:42.129706238Z" level=info msg="StartContainer for \"967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a\"" Sep 11 23:59:42.132581 containerd[1512]: time="2025-09-11T23:59:42.132554930Z" level=info msg="connecting to shim 967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a" address="unix:///run/containerd/s/231c6329fb6a51fce390b83f1b053279e879fd7d482c6042b081e2dff72dc765" protocol=ttrpc version=3 Sep 11 23:59:42.147906 systemd[1]: Started cri-containerd-8c0f99e1f7e8717003ad1fcd1ed595b10b9fe49ce406bbc6421ffe140c96698e.scope - libcontainer container 8c0f99e1f7e8717003ad1fcd1ed595b10b9fe49ce406bbc6421ffe140c96698e. Sep 11 23:59:42.151491 systemd[1]: Started cri-containerd-967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a.scope - libcontainer container 967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a. 
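kube-proxy's startup above follows the standard CRI sequence: RunPodSandbox, then CreateContainer within the returned sandbox, then StartContainer. A sketch of the last two steps; the sandbox id and pod metadata are taken from the log, while the image reference and command are assumptions, since the log never names them:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Sandbox id as returned by RunPodSandbox above.
	sandboxID := "8c98d493dc7bfc9026feecda509f3e2e4cecd2df9d275e8d2a6c078b37f44d13"

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-pt546",
			Uid:       "799f2ab4-8f29-4caf-b821-04d0abf75ce3",
			Namespace: "kube-system",
		},
	}
	created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Image reference and command are assumptions; the log does not record them.
			Image:   &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
			Command: []string{"/usr/local/bin/kube-proxy"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	_, err = client.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	if err != nil {
		panic(err)
	}
	fmt.Println("started", created.ContainerId)
}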
Sep 11 23:59:42.187827 containerd[1512]: time="2025-09-11T23:59:42.187785765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-kz6rk,Uid:f131b582-6d54-4b11-8913-907698ea57d8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c0f99e1f7e8717003ad1fcd1ed595b10b9fe49ce406bbc6421ffe140c96698e\"" Sep 11 23:59:42.190471 containerd[1512]: time="2025-09-11T23:59:42.190427017Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 23:59:42.196042 containerd[1512]: time="2025-09-11T23:59:42.196015480Z" level=info msg="StartContainer for \"967d2f7c9b1f1c651ac858c226cd427fdabb70c7b0fe22072e623ff4e022d81a\" returns successfully" Sep 11 23:59:42.351729 kubelet[2630]: E0911 23:59:42.351617 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:42.352779 kubelet[2630]: E0911 23:59:42.352008 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:42.363772 kubelet[2630]: I0911 23:59:42.363596 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pt546" podStartSLOduration=1.363583354 podStartE2EDuration="1.363583354s" podCreationTimestamp="2025-09-11 23:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:59:42.363557834 +0000 UTC m=+9.142958669" watchObservedRunningTime="2025-09-11 23:59:42.363583354 +0000 UTC m=+9.142984189" Sep 11 23:59:43.685099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649423754.mount: Deactivated successfully. 
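The PullImage request above goes to the CRI image service; the matching "Pulled image ... in 2.031377234s" entry appears below once the operator image lands. A client-side sketch of the same pull, again over the assumed containerd socket:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	start := time.Now()
	resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"}, // ref from the log
	})
	if err != nil {
		panic(err)
	}
	// The log's "Pulled image ... in 2.031377234s" is this same ref-plus-duration pair.
	fmt.Printf("pulled %s in %v\n", resp.ImageRef, time.Since(start))
}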
Sep 11 23:59:44.209145 containerd[1512]: time="2025-09-11T23:59:44.209079083Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:44.217757 containerd[1512]: time="2025-09-11T23:59:44.217692516Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 11 23:59:44.218862 containerd[1512]: time="2025-09-11T23:59:44.218824000Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:44.221079 containerd[1512]: time="2025-09-11T23:59:44.221036048Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:44.221961 containerd[1512]: time="2025-09-11T23:59:44.221843851Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.031377234s" Sep 11 23:59:44.221961 containerd[1512]: time="2025-09-11T23:59:44.221878091Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 11 23:59:44.226906 containerd[1512]: time="2025-09-11T23:59:44.226877070Z" level=info msg="CreateContainer within sandbox \"8c0f99e1f7e8717003ad1fcd1ed595b10b9fe49ce406bbc6421ffe140c96698e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 23:59:44.232527 containerd[1512]: time="2025-09-11T23:59:44.232491611Z" level=info msg="Container c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:59:44.237779 containerd[1512]: time="2025-09-11T23:59:44.237711911Z" level=info msg="CreateContainer within sandbox \"8c0f99e1f7e8717003ad1fcd1ed595b10b9fe49ce406bbc6421ffe140c96698e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1\"" Sep 11 23:59:44.238413 containerd[1512]: time="2025-09-11T23:59:44.238368433Z" level=info msg="StartContainer for \"c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1\"" Sep 11 23:59:44.239283 containerd[1512]: time="2025-09-11T23:59:44.239098236Z" level=info msg="connecting to shim c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1" address="unix:///run/containerd/s/e76842a9280903bf669d9de2e4f1ddbaebb8610063f9b74cf8e9916e5a035e59" protocol=ttrpc version=3 Sep 11 23:59:44.261908 systemd[1]: Started cri-containerd-c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1.scope - libcontainer container c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1. 
Sep 11 23:59:44.299525 containerd[1512]: time="2025-09-11T23:59:44.299437142Z" level=info msg="StartContainer for \"c259f5c61f81c018e11f28d02aa90c62e42888cc0bb9791955f324bb9859a3d1\" returns successfully" Sep 11 23:59:46.421049 kubelet[2630]: E0911 23:59:46.421017 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:46.467194 kubelet[2630]: I0911 23:59:46.467123 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-kz6rk" podStartSLOduration=3.433020088 podStartE2EDuration="5.467107373s" podCreationTimestamp="2025-09-11 23:59:41 +0000 UTC" firstStartedPulling="2025-09-11 23:59:42.190013815 +0000 UTC m=+8.969414650" lastFinishedPulling="2025-09-11 23:59:44.2241011 +0000 UTC m=+11.003501935" observedRunningTime="2025-09-11 23:59:44.367727877 +0000 UTC m=+11.147128712" watchObservedRunningTime="2025-09-11 23:59:46.467107373 +0000 UTC m=+13.246508208" Sep 11 23:59:47.362275 kubelet[2630]: E0911 23:59:47.362232 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:47.571718 kubelet[2630]: E0911 23:59:47.571684 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:49.680792 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 11 23:59:49.682455 sshd[1704]: Connection closed by 10.0.0.1 port 53586 Sep 11 23:59:49.683040 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Sep 11 23:59:49.687849 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:53586.service: Deactivated successfully. Sep 11 23:59:49.690108 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 23:59:49.690295 systemd[1]: session-7.scope: Consumed 6.079s CPU time, 224.4M memory peak. Sep 11 23:59:49.691412 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit. Sep 11 23:59:49.694195 systemd-logind[1488]: Removed session 7. Sep 11 23:59:50.961322 update_engine[1492]: I20250911 23:59:50.960788 1492 update_attempter.cc:509] Updating boot flags... Sep 11 23:59:53.795524 systemd[1]: Created slice kubepods-besteffort-pod2c9b77d1_0f1e_4e20_a8e1_45799c05bc98.slice - libcontainer container kubepods-besteffort-pod2c9b77d1_0f1e_4e20_a8e1_45799c05bc98.slice. 
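The pod_startup_latency_tracker entry for tigera-operator above distinguishes podStartSLOduration from podStartE2EDuration by excluding image-pull time: 5.467107373s end to end, minus the 2.034087285s between firstStartedPulling and lastFinishedPulling, gives 3.433020088s. A worked check in Go, using the watch-observed running time, which is what the logged 5.467s figure corresponds to:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-09-11T23:59:41Z")                 // podCreationTimestamp
	pullStart, _ := time.Parse(time.RFC3339Nano, "2025-09-11T23:59:42.190013815Z") // firstStartedPulling
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2025-09-11T23:59:44.2241011Z")     // lastFinishedPulling
	running, _ := time.Parse(time.RFC3339Nano, "2025-09-11T23:59:46.467107373Z")   // watchObservedRunningTime

	e2e := running.Sub(created)         // podStartE2EDuration: 5.467107373s
	slo := e2e - pullEnd.Sub(pullStart) // minus pull time: 3.433020088s
	fmt.Println(e2e, slo)
}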
Sep 11 23:59:53.823200 kubelet[2630]: I0911 23:59:53.823151 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c9b77d1-0f1e-4e20-a8e1-45799c05bc98-typha-certs\") pod \"calico-typha-5b958985f6-q8hws\" (UID: \"2c9b77d1-0f1e-4e20-a8e1-45799c05bc98\") " pod="calico-system/calico-typha-5b958985f6-q8hws" Sep 11 23:59:53.823200 kubelet[2630]: I0911 23:59:53.823198 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mlw\" (UniqueName: \"kubernetes.io/projected/2c9b77d1-0f1e-4e20-a8e1-45799c05bc98-kube-api-access-92mlw\") pod \"calico-typha-5b958985f6-q8hws\" (UID: \"2c9b77d1-0f1e-4e20-a8e1-45799c05bc98\") " pod="calico-system/calico-typha-5b958985f6-q8hws" Sep 11 23:59:53.823624 kubelet[2630]: I0911 23:59:53.823219 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c9b77d1-0f1e-4e20-a8e1-45799c05bc98-tigera-ca-bundle\") pod \"calico-typha-5b958985f6-q8hws\" (UID: \"2c9b77d1-0f1e-4e20-a8e1-45799c05bc98\") " pod="calico-system/calico-typha-5b958985f6-q8hws" Sep 11 23:59:54.100340 kubelet[2630]: E0911 23:59:54.099974 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:54.100854 containerd[1512]: time="2025-09-11T23:59:54.100821671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b958985f6-q8hws,Uid:2c9b77d1-0f1e-4e20-a8e1-45799c05bc98,Namespace:calico-system,Attempt:0,}" Sep 11 23:59:54.150514 containerd[1512]: time="2025-09-11T23:59:54.150466048Z" level=info msg="connecting to shim ad7f97a09cd21626f44c89bc1cdd7ffad370130020e0a5e0fc5ea4c5f17c0d9f" address="unix:///run/containerd/s/a675bef2e3abf947803a43728dac1b1bd3be812c88dd8bace6b3bb0e58d189af" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:54.167407 systemd[1]: Created slice kubepods-besteffort-pod6bcb6a3a_dcc7_40f0_bb59_619a0773d99b.slice - libcontainer container kubepods-besteffort-pod6bcb6a3a_dcc7_40f0_bb59_619a0773d99b.slice. Sep 11 23:59:54.203898 systemd[1]: Started cri-containerd-ad7f97a09cd21626f44c89bc1cdd7ffad370130020e0a5e0fc5ea4c5f17c0d9f.scope - libcontainer container ad7f97a09cd21626f44c89bc1cdd7ffad370130020e0a5e0fc5ea4c5f17c0d9f. 
Sep 11 23:59:54.226530 kubelet[2630]: I0911 23:59:54.226484 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-cni-log-dir\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226530 kubelet[2630]: I0911 23:59:54.226525 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-tigera-ca-bundle\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226530 kubelet[2630]: I0911 23:59:54.226543 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-var-run-calico\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226705 kubelet[2630]: I0911 23:59:54.226563 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zg8b\" (UniqueName: \"kubernetes.io/projected/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-kube-api-access-4zg8b\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226705 kubelet[2630]: I0911 23:59:54.226580 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-node-certs\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226705 kubelet[2630]: I0911 23:59:54.226597 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-policysync\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226705 kubelet[2630]: I0911 23:59:54.226621 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-cni-bin-dir\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226705 kubelet[2630]: I0911 23:59:54.226639 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-flexvol-driver-host\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226834 kubelet[2630]: I0911 23:59:54.226658 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-var-lib-calico\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226834 kubelet[2630]: I0911 23:59:54.226670 2630 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-xtables-lock\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226834 kubelet[2630]: I0911 23:59:54.226686 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-cni-net-dir\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.226834 kubelet[2630]: I0911 23:59:54.226712 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bcb6a3a-dcc7-40f0-bb59-619a0773d99b-lib-modules\") pod \"calico-node-768q9\" (UID: \"6bcb6a3a-dcc7-40f0-bb59-619a0773d99b\") " pod="calico-system/calico-node-768q9" Sep 11 23:59:54.248366 containerd[1512]: time="2025-09-11T23:59:54.248331321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b958985f6-q8hws,Uid:2c9b77d1-0f1e-4e20-a8e1-45799c05bc98,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad7f97a09cd21626f44c89bc1cdd7ffad370130020e0a5e0fc5ea4c5f17c0d9f\"" Sep 11 23:59:54.249386 kubelet[2630]: E0911 23:59:54.249360 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:54.253018 containerd[1512]: time="2025-09-11T23:59:54.252988490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 23:59:54.329485 kubelet[2630]: E0911 23:59:54.329443 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.329485 kubelet[2630]: W0911 23:59:54.329481 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.335463 kubelet[2630]: E0911 23:59:54.334529 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.335543 kubelet[2630]: E0911 23:59:54.335499 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.335543 kubelet[2630]: W0911 23:59:54.335515 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.335543 kubelet[2630]: E0911 23:59:54.335531 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.340138 kubelet[2630]: E0911 23:59:54.340079 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.340138 kubelet[2630]: W0911 23:59:54.340096 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.340138 kubelet[2630]: E0911 23:59:54.340109 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.432910 kubelet[2630]: E0911 23:59:54.432868 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 11 23:59:54.472082 containerd[1512]: time="2025-09-11T23:59:54.472032840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-768q9,Uid:6bcb6a3a-dcc7-40f0-bb59-619a0773d99b,Namespace:calico-system,Attempt:0,}" Sep 11 23:59:54.499339 containerd[1512]: time="2025-09-11T23:59:54.499288774Z" level=info msg="connecting to shim 736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c" address="unix:///run/containerd/s/47ebc990d069aa280af0881b09e65a66541d983bc3274983ab696607d0060a28" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:59:54.520167 kubelet[2630]: E0911 23:59:54.520131 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.520167 kubelet[2630]: W0911 23:59:54.520154 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.520305 kubelet[2630]: E0911 23:59:54.520175 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.520339 kubelet[2630]: E0911 23:59:54.520328 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.523327 kubelet[2630]: W0911 23:59:54.520336 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.523327 kubelet[2630]: E0911 23:59:54.523314 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.523516 kubelet[2630]: E0911 23:59:54.523487 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.523516 kubelet[2630]: W0911 23:59:54.523501 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.523516 kubelet[2630]: E0911 23:59:54.523511 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.523685 kubelet[2630]: E0911 23:59:54.523661 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.523685 kubelet[2630]: W0911 23:59:54.523673 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.523685 kubelet[2630]: E0911 23:59:54.523682 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.523858 kubelet[2630]: E0911 23:59:54.523846 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.523887 kubelet[2630]: W0911 23:59:54.523857 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.523887 kubelet[2630]: E0911 23:59:54.523866 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.524002 kubelet[2630]: E0911 23:59:54.523990 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524002 kubelet[2630]: W0911 23:59:54.524001 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524042 kubelet[2630]: E0911 23:59:54.524009 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.524131 kubelet[2630]: E0911 23:59:54.524121 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524154 kubelet[2630]: W0911 23:59:54.524131 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524154 kubelet[2630]: E0911 23:59:54.524138 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.524277 kubelet[2630]: E0911 23:59:54.524265 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524277 kubelet[2630]: W0911 23:59:54.524276 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524320 kubelet[2630]: E0911 23:59:54.524284 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.524442 kubelet[2630]: E0911 23:59:54.524429 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524465 kubelet[2630]: W0911 23:59:54.524441 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524465 kubelet[2630]: E0911 23:59:54.524449 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.524578 kubelet[2630]: E0911 23:59:54.524568 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524600 kubelet[2630]: W0911 23:59:54.524577 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524600 kubelet[2630]: E0911 23:59:54.524586 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.524705 kubelet[2630]: E0911 23:59:54.524695 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524727 kubelet[2630]: W0911 23:59:54.524705 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524727 kubelet[2630]: E0911 23:59:54.524712 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.524852 kubelet[2630]: E0911 23:59:54.524841 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524852 kubelet[2630]: W0911 23:59:54.524851 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.524895 kubelet[2630]: E0911 23:59:54.524858 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.524997 kubelet[2630]: E0911 23:59:54.524985 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.524997 kubelet[2630]: W0911 23:59:54.524996 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.525038 kubelet[2630]: E0911 23:59:54.525003 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.525123 kubelet[2630]: E0911 23:59:54.525113 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.525123 kubelet[2630]: W0911 23:59:54.525122 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.525162 kubelet[2630]: E0911 23:59:54.525129 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.525247 kubelet[2630]: E0911 23:59:54.525232 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.525270 kubelet[2630]: W0911 23:59:54.525250 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.525270 kubelet[2630]: E0911 23:59:54.525259 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.525405 kubelet[2630]: E0911 23:59:54.525393 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.525405 kubelet[2630]: W0911 23:59:54.525403 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.525445 kubelet[2630]: E0911 23:59:54.525411 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.525551 kubelet[2630]: E0911 23:59:54.525540 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.525572 kubelet[2630]: W0911 23:59:54.525551 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.525572 kubelet[2630]: E0911 23:59:54.525558 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.525679 kubelet[2630]: E0911 23:59:54.525669 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.525679 kubelet[2630]: W0911 23:59:54.525678 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.525720 kubelet[2630]: E0911 23:59:54.525685 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.525946 systemd[1]: Started cri-containerd-736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c.scope - libcontainer container 736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c. Sep 11 23:59:54.526083 kubelet[2630]: E0911 23:59:54.526066 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.526083 kubelet[2630]: W0911 23:59:54.526080 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.526131 kubelet[2630]: E0911 23:59:54.526092 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.526548 kubelet[2630]: E0911 23:59:54.526514 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.526548 kubelet[2630]: W0911 23:59:54.526534 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.526548 kubelet[2630]: E0911 23:59:54.526547 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.529029 kubelet[2630]: E0911 23:59:54.529011 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.529029 kubelet[2630]: W0911 23:59:54.529027 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.529126 kubelet[2630]: E0911 23:59:54.529039 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.529126 kubelet[2630]: I0911 23:59:54.529065 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0cbcc7da-746e-4cfb-9c94-96d5f1400fdf-registration-dir\") pod \"csi-node-driver-fv47h\" (UID: \"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf\") " pod="calico-system/csi-node-driver-fv47h" Sep 11 23:59:54.529230 kubelet[2630]: E0911 23:59:54.529218 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.529230 kubelet[2630]: W0911 23:59:54.529229 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.529295 kubelet[2630]: E0911 23:59:54.529237 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.529295 kubelet[2630]: I0911 23:59:54.529265 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0cbcc7da-746e-4cfb-9c94-96d5f1400fdf-socket-dir\") pod \"csi-node-driver-fv47h\" (UID: \"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf\") " pod="calico-system/csi-node-driver-fv47h" Sep 11 23:59:54.529548 kubelet[2630]: E0911 23:59:54.529531 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.529548 kubelet[2630]: W0911 23:59:54.529548 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.529611 kubelet[2630]: E0911 23:59:54.529560 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.529814 kubelet[2630]: E0911 23:59:54.529800 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.529895 kubelet[2630]: W0911 23:59:54.529814 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.529895 kubelet[2630]: E0911 23:59:54.529824 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.530029 kubelet[2630]: E0911 23:59:54.530007 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.530029 kubelet[2630]: W0911 23:59:54.530024 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.530083 kubelet[2630]: E0911 23:59:54.530034 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.530755 kubelet[2630]: I0911 23:59:54.530672 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5g8\" (UniqueName: \"kubernetes.io/projected/0cbcc7da-746e-4cfb-9c94-96d5f1400fdf-kube-api-access-nr5g8\") pod \"csi-node-driver-fv47h\" (UID: \"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf\") " pod="calico-system/csi-node-driver-fv47h" Sep 11 23:59:54.530755 kubelet[2630]: E0911 23:59:54.530675 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.530755 kubelet[2630]: W0911 23:59:54.530732 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.530864 kubelet[2630]: E0911 23:59:54.530743 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.531512 kubelet[2630]: E0911 23:59:54.531412 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.531512 kubelet[2630]: W0911 23:59:54.531443 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.531512 kubelet[2630]: E0911 23:59:54.531457 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.531810 kubelet[2630]: E0911 23:59:54.531797 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.531887 kubelet[2630]: W0911 23:59:54.531875 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.531939 kubelet[2630]: E0911 23:59:54.531929 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.536589 kubelet[2630]: E0911 23:59:54.536558 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.536589 kubelet[2630]: W0911 23:59:54.536578 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.536589 kubelet[2630]: E0911 23:59:54.536593 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.536895 kubelet[2630]: I0911 23:59:54.536623 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cbcc7da-746e-4cfb-9c94-96d5f1400fdf-kubelet-dir\") pod \"csi-node-driver-fv47h\" (UID: \"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf\") " pod="calico-system/csi-node-driver-fv47h" Sep 11 23:59:54.537522 kubelet[2630]: E0911 23:59:54.537498 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.537522 kubelet[2630]: W0911 23:59:54.537518 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.537656 kubelet[2630]: E0911 23:59:54.537539 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.538193 kubelet[2630]: E0911 23:59:54.538015 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.538193 kubelet[2630]: W0911 23:59:54.538033 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.538193 kubelet[2630]: E0911 23:59:54.538054 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.538907 kubelet[2630]: E0911 23:59:54.538885 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.538907 kubelet[2630]: W0911 23:59:54.538903 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.538987 kubelet[2630]: E0911 23:59:54.538917 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.539531 kubelet[2630]: E0911 23:59:54.539199 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.539531 kubelet[2630]: W0911 23:59:54.539213 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.539531 kubelet[2630]: E0911 23:59:54.539223 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.539531 kubelet[2630]: I0911 23:59:54.539389 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0cbcc7da-746e-4cfb-9c94-96d5f1400fdf-varrun\") pod \"csi-node-driver-fv47h\" (UID: \"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf\") " pod="calico-system/csi-node-driver-fv47h" Sep 11 23:59:54.539531 kubelet[2630]: E0911 23:59:54.539482 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.539531 kubelet[2630]: W0911 23:59:54.539491 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.539531 kubelet[2630]: E0911 23:59:54.539502 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.539703 kubelet[2630]: E0911 23:59:54.539664 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.539703 kubelet[2630]: W0911 23:59:54.539674 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.539703 kubelet[2630]: E0911 23:59:54.539687 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.550164 containerd[1512]: time="2025-09-11T23:59:54.550127233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-768q9,Uid:6bcb6a3a-dcc7-40f0-bb59-619a0773d99b,Namespace:calico-system,Attempt:0,} returns sandbox id \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\"" Sep 11 23:59:54.640609 kubelet[2630]: E0911 23:59:54.640573 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.640609 kubelet[2630]: W0911 23:59:54.640596 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.640609 kubelet[2630]: E0911 23:59:54.640617 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.640844 kubelet[2630]: E0911 23:59:54.640814 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.640844 kubelet[2630]: W0911 23:59:54.640826 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.640844 kubelet[2630]: E0911 23:59:54.640836 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.641139 kubelet[2630]: E0911 23:59:54.641121 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.641203 kubelet[2630]: W0911 23:59:54.641190 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.641271 kubelet[2630]: E0911 23:59:54.641259 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.641473 kubelet[2630]: E0911 23:59:54.641459 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.641533 kubelet[2630]: W0911 23:59:54.641523 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.641580 kubelet[2630]: E0911 23:59:54.641571 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.641795 kubelet[2630]: E0911 23:59:54.641781 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.642028 kubelet[2630]: W0911 23:59:54.641860 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.642028 kubelet[2630]: E0911 23:59:54.641879 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.642161 kubelet[2630]: E0911 23:59:54.642149 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.642209 kubelet[2630]: W0911 23:59:54.642199 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.642272 kubelet[2630]: E0911 23:59:54.642261 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.642557 kubelet[2630]: E0911 23:59:54.642453 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.642557 kubelet[2630]: W0911 23:59:54.642463 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.642557 kubelet[2630]: E0911 23:59:54.642472 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.642705 kubelet[2630]: E0911 23:59:54.642693 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.642876 kubelet[2630]: W0911 23:59:54.642763 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.642876 kubelet[2630]: E0911 23:59:54.642780 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.643007 kubelet[2630]: E0911 23:59:54.642995 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.643156 kubelet[2630]: W0911 23:59:54.643056 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.643156 kubelet[2630]: E0911 23:59:54.643070 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.643277 kubelet[2630]: E0911 23:59:54.643265 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.643334 kubelet[2630]: W0911 23:59:54.643324 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.643382 kubelet[2630]: E0911 23:59:54.643372 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.643711 kubelet[2630]: E0911 23:59:54.643593 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.643711 kubelet[2630]: W0911 23:59:54.643604 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.643711 kubelet[2630]: E0911 23:59:54.643613 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.643884 kubelet[2630]: E0911 23:59:54.643871 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.643941 kubelet[2630]: W0911 23:59:54.643930 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.643988 kubelet[2630]: E0911 23:59:54.643979 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.644200 kubelet[2630]: E0911 23:59:54.644189 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.644285 kubelet[2630]: W0911 23:59:54.644273 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.644341 kubelet[2630]: E0911 23:59:54.644331 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.644661 kubelet[2630]: E0911 23:59:54.644557 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.644661 kubelet[2630]: W0911 23:59:54.644569 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.644661 kubelet[2630]: E0911 23:59:54.644578 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.644827 kubelet[2630]: E0911 23:59:54.644814 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.644892 kubelet[2630]: W0911 23:59:54.644880 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.644948 kubelet[2630]: E0911 23:59:54.644938 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.645222 kubelet[2630]: E0911 23:59:54.645131 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.645222 kubelet[2630]: W0911 23:59:54.645142 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.645222 kubelet[2630]: E0911 23:59:54.645151 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.645383 kubelet[2630]: E0911 23:59:54.645371 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.645431 kubelet[2630]: W0911 23:59:54.645421 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.645479 kubelet[2630]: E0911 23:59:54.645470 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.645906 kubelet[2630]: E0911 23:59:54.645882 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.645906 kubelet[2630]: W0911 23:59:54.645899 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.645978 kubelet[2630]: E0911 23:59:54.645912 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.646090 kubelet[2630]: E0911 23:59:54.646066 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.646090 kubelet[2630]: W0911 23:59:54.646078 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.646090 kubelet[2630]: E0911 23:59:54.646086 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.646233 kubelet[2630]: E0911 23:59:54.646221 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.646233 kubelet[2630]: W0911 23:59:54.646232 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.646305 kubelet[2630]: E0911 23:59:54.646240 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.646467 kubelet[2630]: E0911 23:59:54.646420 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.646467 kubelet[2630]: W0911 23:59:54.646431 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.646467 kubelet[2630]: E0911 23:59:54.646439 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.646636 kubelet[2630]: E0911 23:59:54.646622 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.646636 kubelet[2630]: W0911 23:59:54.646635 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.646695 kubelet[2630]: E0911 23:59:54.646643 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:54.646836 kubelet[2630]: E0911 23:59:54.646825 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.646836 kubelet[2630]: W0911 23:59:54.646835 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.646881 kubelet[2630]: E0911 23:59:54.646843 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.647330 kubelet[2630]: E0911 23:59:54.647316 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.647371 kubelet[2630]: W0911 23:59:54.647331 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.647371 kubelet[2630]: E0911 23:59:54.647361 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.647602 kubelet[2630]: E0911 23:59:54.647589 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.647602 kubelet[2630]: W0911 23:59:54.647602 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.647658 kubelet[2630]: E0911 23:59:54.647611 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:54.656887 kubelet[2630]: E0911 23:59:54.656854 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:54.656887 kubelet[2630]: W0911 23:59:54.656874 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:54.656993 kubelet[2630]: E0911 23:59:54.656976 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:59:55.297889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16125630.mount: Deactivated successfully. 
Sep 11 23:59:56.321376 kubelet[2630]: E0911 23:59:56.321314 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 11 23:59:57.405790 containerd[1512]: time="2025-09-11T23:59:57.405463780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:57.406615 containerd[1512]: time="2025-09-11T23:59:57.406588182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 11 23:59:57.408555 containerd[1512]: time="2025-09-11T23:59:57.408345185Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:57.411878 containerd[1512]: time="2025-09-11T23:59:57.411842471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:57.413031 containerd[1512]: time="2025-09-11T23:59:57.412591032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 3.159567462s" Sep 11 23:59:57.413216 containerd[1512]: time="2025-09-11T23:59:57.413092033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 11 23:59:57.417230 containerd[1512]: time="2025-09-11T23:59:57.417198719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 23:59:57.432207 containerd[1512]: time="2025-09-11T23:59:57.432165824Z" level=info msg="CreateContainer within sandbox \"ad7f97a09cd21626f44c89bc1cdd7ffad370130020e0a5e0fc5ea4c5f17c0d9f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 23:59:57.440350 containerd[1512]: time="2025-09-11T23:59:57.439468275Z" level=info msg="Container ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:59:57.450246 containerd[1512]: time="2025-09-11T23:59:57.450199093Z" level=info msg="CreateContainer within sandbox \"ad7f97a09cd21626f44c89bc1cdd7ffad370130020e0a5e0fc5ea4c5f17c0d9f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43\"" Sep 11 23:59:57.454325 containerd[1512]: time="2025-09-11T23:59:57.454298579Z" level=info msg="StartContainer for \"ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43\"" Sep 11 23:59:57.456373 containerd[1512]: time="2025-09-11T23:59:57.456347983Z" level=info msg="connecting to shim ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43" address="unix:///run/containerd/s/a675bef2e3abf947803a43728dac1b1bd3be812c88dd8bace6b3bb0e58d189af" protocol=ttrpc version=3 Sep 11 23:59:57.476898 systemd[1]: Started 
cri-containerd-ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43.scope - libcontainer container ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43. Sep 11 23:59:57.509981 containerd[1512]: time="2025-09-11T23:59:57.509943110Z" level=info msg="StartContainer for \"ba5c961b44fb986e0f3be44ea7c4d0b243e9ba6842616fede7b1b166a2701d43\" returns successfully" Sep 11 23:59:58.321650 kubelet[2630]: E0911 23:59:58.321091 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 11 23:59:58.399842 kubelet[2630]: E0911 23:59:58.399614 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:58.416531 kubelet[2630]: I0911 23:59:58.416455 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b958985f6-q8hws" podStartSLOduration=2.24969702 podStartE2EDuration="5.416439334s" podCreationTimestamp="2025-09-11 23:59:53 +0000 UTC" firstStartedPulling="2025-09-11 23:59:54.250318365 +0000 UTC m=+21.029719200" lastFinishedPulling="2025-09-11 23:59:57.417060599 +0000 UTC m=+24.196461514" observedRunningTime="2025-09-11 23:59:58.416226334 +0000 UTC m=+25.195627169" watchObservedRunningTime="2025-09-11 23:59:58.416439334 +0000 UTC m=+25.195840169" Sep 11 23:59:58.456039 containerd[1512]: time="2025-09-11T23:59:58.455987274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:58.456824 containerd[1512]: time="2025-09-11T23:59:58.456701355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 11 23:59:58.458502 containerd[1512]: time="2025-09-11T23:59:58.458468758Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:59:58.459994 kubelet[2630]: E0911 23:59:58.459966 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:59:58.459994 kubelet[2630]: W0911 23:59:58.459990 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:59:58.460068 kubelet[2630]: E0911 23:59:58.460012 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:59:58.460192 kubelet[2630]: E0911 23:59:58.460180 2630 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 23:59:58.460228 kubelet[2630]: W0911 23:59:58.460190 2630 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 23:59:58.460284 kubelet[2630]: E0911 23:59:58.460230 2630 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 23:59:58.460711 containerd[1512]: time="2025-09-11T23:59:58.460682601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:59:58.461346 containerd[1512]: time="2025-09-11T23:59:58.461310762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.044086363s"
Sep 11 23:59:58.461346 containerd[1512]: time="2025-09-11T23:59:58.461341682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 11 23:59:58.465274 containerd[1512]: time="2025-09-11T23:59:58.465246128Z" level=info msg="CreateContainer within sandbox \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 11 23:59:58.473965 containerd[1512]: time="2025-09-11T23:59:58.473933982Z" level=info msg="Container 34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:59:58.487761 containerd[1512]: time="2025-09-11T23:59:58.487517722Z" level=info msg="CreateContainer within sandbox \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\""
Sep 11 23:59:58.488348 containerd[1512]: time="2025-09-11T23:59:58.488269563Z" level=info msg="StartContainer for \"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\""
Sep 11 23:59:58.489687 containerd[1512]: time="2025-09-11T23:59:58.489664965Z" level=info msg="connecting to shim 34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937" address="unix:///run/containerd/s/47ebc990d069aa280af0881b09e65a66541d983bc3274983ab696607d0060a28" protocol=ttrpc version=3
Sep 11 23:59:58.514020 systemd[1]: Started cri-containerd-34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937.scope - libcontainer container 34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937.
Sep 11 23:59:58.570301 systemd[1]: cri-containerd-34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937.scope: Deactivated successfully.
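The kubelet triplet above (driver-call.go:262, driver-call.go:149, plugins.go:703) recurs throughout this window: on each volume-plugin probe cycle the kubelet execs every driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the init verb and parses its stdout as JSON, so the still-missing nodeagent~uds/uds executable yields empty output and the "unexpected end of JSON input" unmarshal error until the flexvol-driver init container started here installs it. A minimal sketch of the response contract a FlexVolume driver is expected to honor, using a hypothetical stub in place of the missing binary (illustrative only, not Calico's actual uds driver):

```go
// flexvol_stub.go - sketch of the FlexVolume driver-call contract the
// kubelet messages above are exercising: the kubelet execs the driver
// with a verb ("init" here) and unmarshals stdout as JSON, so an absent
// binary (empty output) surfaces as "unexpected end of JSON input".
// This stub is a hypothetical stand-in, not Calico's nodeagent~uds/uds.
package main

import (
	"encoding/json"
	"os"
)

// DriverStatus mirrors the documented FlexVolume JSON response envelope.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Tell the kubelet this driver needs no attach/detach support.
		out.Encode(DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Any verb this stub does not implement must still answer in JSON.
	out.Encode(DriverStatus{Status: "Not supported"})
}
```

The key point is that every verb must answer on stdout with a JSON envelope; empty stdout is exactly what the probe loop keeps logging above.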
Sep 11 23:59:58.593136 containerd[1512]: time="2025-09-11T23:59:58.593012882Z" level=info msg="StartContainer for \"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\" returns successfully" Sep 11 23:59:58.605777 containerd[1512]: time="2025-09-11T23:59:58.605398861Z" level=info msg="received exit event container_id:\"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\" id:\"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\" pid:3339 exited_at:{seconds:1757635198 nanos:578163660}" Sep 11 23:59:58.608179 containerd[1512]: time="2025-09-11T23:59:58.608148905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\" id:\"34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937\" pid:3339 exited_at:{seconds:1757635198 nanos:578163660}" Sep 11 23:59:58.639430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34db9266ec626d509c72a9c82fded9d63b139c94ba47b38397b8e6cdd3c4e937-rootfs.mount: Deactivated successfully. Sep 11 23:59:59.405773 kubelet[2630]: I0911 23:59:59.405344 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:59:59.407337 kubelet[2630]: E0911 23:59:59.406864 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:59:59.408630 containerd[1512]: time="2025-09-11T23:59:59.408582281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 00:00:00.322158 kubelet[2630]: E0912 00:00:00.320861 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 12 00:00:02.118871 containerd[1512]: time="2025-09-12T00:00:02.118816445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:02.119921 containerd[1512]: time="2025-09-12T00:00:02.119889606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 00:00:02.120679 containerd[1512]: time="2025-09-12T00:00:02.120650967Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:02.122416 containerd[1512]: time="2025-09-12T00:00:02.122386289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:02.123333 containerd[1512]: time="2025-09-12T00:00:02.123293730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.714673649s" Sep 12 00:00:02.123333 containerd[1512]: time="2025-09-12T00:00:02.123327370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 00:00:02.127984 containerd[1512]: time="2025-09-12T00:00:02.127927775Z" level=info msg="CreateContainer within sandbox \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 00:00:02.149791 containerd[1512]: time="2025-09-12T00:00:02.149714241Z" level=info msg="Container 5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:00:02.158742 containerd[1512]: time="2025-09-12T00:00:02.158692331Z" level=info msg="CreateContainer within sandbox \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\"" Sep 12 00:00:02.159801 containerd[1512]: time="2025-09-12T00:00:02.159177692Z" level=info msg="StartContainer for \"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\"" Sep 12 00:00:02.160661 containerd[1512]: time="2025-09-12T00:00:02.160638214Z" level=info msg="connecting to shim 5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96" address="unix:///run/containerd/s/47ebc990d069aa280af0881b09e65a66541d983bc3274983ab696607d0060a28" protocol=ttrpc version=3 Sep 12 00:00:02.181937 systemd[1]: Started cri-containerd-5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96.scope - libcontainer container 5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96. Sep 12 00:00:02.218372 containerd[1512]: time="2025-09-12T00:00:02.218133281Z" level=info msg="StartContainer for \"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\" returns successfully" Sep 12 00:00:02.321218 kubelet[2630]: E0912 00:00:02.321156 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 12 00:00:02.864119 containerd[1512]: time="2025-09-12T00:00:02.863819558Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 00:00:02.866502 systemd[1]: cri-containerd-5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96.scope: Deactivated successfully. Sep 12 00:00:02.866798 systemd[1]: cri-containerd-5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96.scope: Consumed 457ms CPU time, 176.8M memory peak, 3.3M read from disk, 165.8M written to disk. 
Sep 12 00:00:02.868098 containerd[1512]: time="2025-09-12T00:00:02.868062683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\" id:\"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\" pid:3398 exited_at:{seconds:1757635202 nanos:867714482}" Sep 12 00:00:02.872930 containerd[1512]: time="2025-09-12T00:00:02.872884888Z" level=info msg="received exit event container_id:\"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\" id:\"5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96\" pid:3398 exited_at:{seconds:1757635202 nanos:867714482}" Sep 12 00:00:02.891022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5708a1d58af825a7e1bb88dc267d17d56f6145e7bfc205609fb822972f249e96-rootfs.mount: Deactivated successfully. Sep 12 00:00:02.960798 kubelet[2630]: I0912 00:00:02.960460 2630 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 00:00:03.009865 systemd[1]: Created slice kubepods-burstable-pod093381d9_e75d_4201_91b6_3a2eb516857c.slice - libcontainer container kubepods-burstable-pod093381d9_e75d_4201_91b6_3a2eb516857c.slice. Sep 12 00:00:03.017907 systemd[1]: Created slice kubepods-besteffort-pod743f1d9d_1a8f_4684_bf04_5f3c17d51801.slice - libcontainer container kubepods-besteffort-pod743f1d9d_1a8f_4684_bf04_5f3c17d51801.slice. Sep 12 00:00:03.026256 systemd[1]: Created slice kubepods-burstable-pod3bbd14ad_e4c3_4bf0_abfe_08d2fa32acfd.slice - libcontainer container kubepods-burstable-pod3bbd14ad_e4c3_4bf0_abfe_08d2fa32acfd.slice. Sep 12 00:00:03.032549 systemd[1]: Created slice kubepods-besteffort-pod59115248_060d_4fd7_a923_109785bfe839.slice - libcontainer container kubepods-besteffort-pod59115248_060d_4fd7_a923_109785bfe839.slice. Sep 12 00:00:03.038060 systemd[1]: Created slice kubepods-besteffort-podd1a33081_1e30_4a18_baff_a8e2bfa85db2.slice - libcontainer container kubepods-besteffort-podd1a33081_1e30_4a18_baff_a8e2bfa85db2.slice. Sep 12 00:00:03.045007 systemd[1]: Created slice kubepods-besteffort-pod3c075dbc_5473_4587_8e32_5346879edeb3.slice - libcontainer container kubepods-besteffort-pod3c075dbc_5473_4587_8e32_5346879edeb3.slice. Sep 12 00:00:03.049301 systemd[1]: Created slice kubepods-besteffort-pod033b34fe_8d24_40ac_9f3c_8e88b05e828e.slice - libcontainer container kubepods-besteffort-pod033b34fe_8d24_40ac_9f3c_8e88b05e828e.slice. 
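The "Created slice" names above encode each pod's QoS class and UID under the kubelet's systemd cgroup driver: Burstable and BestEffort pods get a QoS sub-slice, and the UID's dashes become underscores because "-" is systemd's slice hierarchy separator. A small sketch reproducing the naming seen in these records (illustrative, not kubelet source):

```go
// podslice.go - sketch of how the slice names in the log derive from
// pod UIDs: kubepods-burstable-pod093381d9_e75d_4201_91b6_3a2eb516857c.slice
// above corresponds to pod UID 093381d9-e75d-4201-91b6-3a2eb516857c.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	prefix := "kubepods"
	if qosClass != "Guaranteed" { // Guaranteed pods sit directly under kubepods
		prefix += "-" + strings.ToLower(qosClass)
	}
	// Dashes in the UID would be read by systemd as nesting, so they
	// are replaced with underscores in the slice name.
	return fmt.Sprintf("%s-pod%s.slice", prefix, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("Burstable", "093381d9-e75d-4201-91b6-3a2eb516857c"))
	// -> kubepods-burstable-pod093381d9_e75d_4201_91b6_3a2eb516857c.slice
}
```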
Sep 12 00:00:03.101286 kubelet[2630]: I0912 00:00:03.101242 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/033b34fe-8d24-40ac-9f3c-8e88b05e828e-calico-apiserver-certs\") pod \"calico-apiserver-948cb9647-j6lg2\" (UID: \"033b34fe-8d24-40ac-9f3c-8e88b05e828e\") " pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" Sep 12 00:00:03.101286 kubelet[2630]: I0912 00:00:03.101289 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743f1d9d-1a8f-4684-bf04-5f3c17d51801-config\") pod \"goldmane-54d579b49d-g56ln\" (UID: \"743f1d9d-1a8f-4684-bf04-5f3c17d51801\") " pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:03.101445 kubelet[2630]: I0912 00:00:03.101307 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkntv\" (UniqueName: \"kubernetes.io/projected/743f1d9d-1a8f-4684-bf04-5f3c17d51801-kube-api-access-gkntv\") pod \"goldmane-54d579b49d-g56ln\" (UID: \"743f1d9d-1a8f-4684-bf04-5f3c17d51801\") " pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:03.101445 kubelet[2630]: I0912 00:00:03.101325 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwns6\" (UniqueName: \"kubernetes.io/projected/d1a33081-1e30-4a18-baff-a8e2bfa85db2-kube-api-access-mwns6\") pod \"calico-apiserver-948cb9647-w47mh\" (UID: \"d1a33081-1e30-4a18-baff-a8e2bfa85db2\") " pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" Sep 12 00:00:03.101445 kubelet[2630]: I0912 00:00:03.101342 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7t5j\" (UniqueName: \"kubernetes.io/projected/3c075dbc-5473-4587-8e32-5346879edeb3-kube-api-access-h7t5j\") pod \"calico-kube-controllers-669f89fcc5-8lg6p\" (UID: \"3c075dbc-5473-4587-8e32-5346879edeb3\") " pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" Sep 12 00:00:03.101445 kubelet[2630]: I0912 00:00:03.101362 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59115248-060d-4fd7-a923-109785bfe839-whisker-backend-key-pair\") pod \"whisker-bf794d5db-pmpnk\" (UID: \"59115248-060d-4fd7-a923-109785bfe839\") " pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 00:00:03.101445 kubelet[2630]: I0912 00:00:03.101379 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1a33081-1e30-4a18-baff-a8e2bfa85db2-calico-apiserver-certs\") pod \"calico-apiserver-948cb9647-w47mh\" (UID: \"d1a33081-1e30-4a18-baff-a8e2bfa85db2\") " pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" Sep 12 00:00:03.101546 kubelet[2630]: I0912 00:00:03.101394 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c075dbc-5473-4587-8e32-5346879edeb3-tigera-ca-bundle\") pod \"calico-kube-controllers-669f89fcc5-8lg6p\" (UID: \"3c075dbc-5473-4587-8e32-5346879edeb3\") " pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" Sep 12 00:00:03.101546 kubelet[2630]: I0912 00:00:03.101408 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093381d9-e75d-4201-91b6-3a2eb516857c-config-volume\") pod \"coredns-674b8bbfcf-nbffs\" (UID: \"093381d9-e75d-4201-91b6-3a2eb516857c\") " pod="kube-system/coredns-674b8bbfcf-nbffs" Sep 12 00:00:03.101546 kubelet[2630]: I0912 00:00:03.101423 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd-config-volume\") pod \"coredns-674b8bbfcf-clc9x\" (UID: \"3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd\") " pod="kube-system/coredns-674b8bbfcf-clc9x" Sep 12 00:00:03.101546 kubelet[2630]: I0912 00:00:03.101441 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/743f1d9d-1a8f-4684-bf04-5f3c17d51801-goldmane-key-pair\") pod \"goldmane-54d579b49d-g56ln\" (UID: \"743f1d9d-1a8f-4684-bf04-5f3c17d51801\") " pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:03.101546 kubelet[2630]: I0912 00:00:03.101455 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59115248-060d-4fd7-a923-109785bfe839-whisker-ca-bundle\") pod \"whisker-bf794d5db-pmpnk\" (UID: \"59115248-060d-4fd7-a923-109785bfe839\") " pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 00:00:03.101656 kubelet[2630]: I0912 00:00:03.101472 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s775t\" (UniqueName: \"kubernetes.io/projected/3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd-kube-api-access-s775t\") pod \"coredns-674b8bbfcf-clc9x\" (UID: \"3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd\") " pod="kube-system/coredns-674b8bbfcf-clc9x" Sep 12 00:00:03.101656 kubelet[2630]: I0912 00:00:03.101490 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnpbj\" (UniqueName: \"kubernetes.io/projected/033b34fe-8d24-40ac-9f3c-8e88b05e828e-kube-api-access-qnpbj\") pod \"calico-apiserver-948cb9647-j6lg2\" (UID: \"033b34fe-8d24-40ac-9f3c-8e88b05e828e\") " pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" Sep 12 00:00:03.101656 kubelet[2630]: I0912 00:00:03.101516 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/743f1d9d-1a8f-4684-bf04-5f3c17d51801-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-g56ln\" (UID: \"743f1d9d-1a8f-4684-bf04-5f3c17d51801\") " pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:03.101656 kubelet[2630]: I0912 00:00:03.101535 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2zl\" (UniqueName: \"kubernetes.io/projected/59115248-060d-4fd7-a923-109785bfe839-kube-api-access-jf2zl\") pod \"whisker-bf794d5db-pmpnk\" (UID: \"59115248-060d-4fd7-a923-109785bfe839\") " pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 00:00:03.101656 kubelet[2630]: I0912 00:00:03.101549 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2km\" (UniqueName: \"kubernetes.io/projected/093381d9-e75d-4201-91b6-3a2eb516857c-kube-api-access-5c2km\") pod \"coredns-674b8bbfcf-nbffs\" (UID: \"093381d9-e75d-4201-91b6-3a2eb516857c\") " pod="kube-system/coredns-674b8bbfcf-nbffs" Sep 12 
00:00:03.313658 kubelet[2630]: E0912 00:00:03.313593 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:03.314591 containerd[1512]: time="2025-09-12T00:00:03.314529903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbffs,Uid:093381d9-e75d-4201-91b6-3a2eb516857c,Namespace:kube-system,Attempt:0,}" Sep 12 00:00:03.322847 containerd[1512]: time="2025-09-12T00:00:03.322777312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g56ln,Uid:743f1d9d-1a8f-4684-bf04-5f3c17d51801,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:03.329987 kubelet[2630]: E0912 00:00:03.329957 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:03.330673 containerd[1512]: time="2025-09-12T00:00:03.330600201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clc9x,Uid:3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd,Namespace:kube-system,Attempt:0,}" Sep 12 00:00:03.335894 containerd[1512]: time="2025-09-12T00:00:03.335866446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf794d5db-pmpnk,Uid:59115248-060d-4fd7-a923-109785bfe839,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:03.342171 containerd[1512]: time="2025-09-12T00:00:03.342116013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-w47mh,Uid:d1a33081-1e30-4a18-baff-a8e2bfa85db2,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:00:03.350001 containerd[1512]: time="2025-09-12T00:00:03.349958462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669f89fcc5-8lg6p,Uid:3c075dbc-5473-4587-8e32-5346879edeb3,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:03.356608 containerd[1512]: time="2025-09-12T00:00:03.356473989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-j6lg2,Uid:033b34fe-8d24-40ac-9f3c-8e88b05e828e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:00:03.434551 containerd[1512]: time="2025-09-12T00:00:03.434501715Z" level=error msg="Failed to destroy network for sandbox \"fb084a1cdad444b5dd59f3396c49e007a611525531bf853b3e548f034e9d769f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.440288 containerd[1512]: time="2025-09-12T00:00:03.440165161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clc9x,Uid:3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb084a1cdad444b5dd59f3396c49e007a611525531bf853b3e548f034e9d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.455112 containerd[1512]: time="2025-09-12T00:00:03.454419577Z" level=error msg="Failed to destroy network for sandbox \"747a898010b69a6550c5610abf6ebba832b30c724312240d871e86f07c5cc678\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
00:00:03.456444 containerd[1512]: time="2025-09-12T00:00:03.456395659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbffs,Uid:093381d9-e75d-4201-91b6-3a2eb516857c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"747a898010b69a6550c5610abf6ebba832b30c724312240d871e86f07c5cc678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.457622 kubelet[2630]: E0912 00:00:03.457554 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb084a1cdad444b5dd59f3396c49e007a611525531bf853b3e548f034e9d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.457723 kubelet[2630]: E0912 00:00:03.457649 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb084a1cdad444b5dd59f3396c49e007a611525531bf853b3e548f034e9d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-clc9x" Sep 12 00:00:03.457723 kubelet[2630]: E0912 00:00:03.457671 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb084a1cdad444b5dd59f3396c49e007a611525531bf853b3e548f034e9d769f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-clc9x" Sep 12 00:00:03.457806 kubelet[2630]: E0912 00:00:03.457721 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-clc9x_kube-system(3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-clc9x_kube-system(3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb084a1cdad444b5dd59f3396c49e007a611525531bf853b3e548f034e9d769f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-clc9x" podUID="3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd" Sep 12 00:00:03.459347 kubelet[2630]: E0912 00:00:03.459220 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747a898010b69a6550c5610abf6ebba832b30c724312240d871e86f07c5cc678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.459347 kubelet[2630]: E0912 00:00:03.459275 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747a898010b69a6550c5610abf6ebba832b30c724312240d871e86f07c5cc678\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nbffs" Sep 12 00:00:03.459347 kubelet[2630]: E0912 00:00:03.459294 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747a898010b69a6550c5610abf6ebba832b30c724312240d871e86f07c5cc678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nbffs" Sep 12 00:00:03.459471 kubelet[2630]: E0912 00:00:03.459333 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nbffs_kube-system(093381d9-e75d-4201-91b6-3a2eb516857c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nbffs_kube-system(093381d9-e75d-4201-91b6-3a2eb516857c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"747a898010b69a6550c5610abf6ebba832b30c724312240d871e86f07c5cc678\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nbffs" podUID="093381d9-e75d-4201-91b6-3a2eb516857c" Sep 12 00:00:03.461673 containerd[1512]: time="2025-09-12T00:00:03.461435784Z" level=error msg="Failed to destroy network for sandbox \"b44eeabc6fd2a9f194a35349b327fa98179d23c872122a63410e0e0be1e5e029\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.462350 containerd[1512]: time="2025-09-12T00:00:03.462281545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 00:00:03.464549 containerd[1512]: time="2025-09-12T00:00:03.464504828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g56ln,Uid:743f1d9d-1a8f-4684-bf04-5f3c17d51801,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44eeabc6fd2a9f194a35349b327fa98179d23c872122a63410e0e0be1e5e029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.464896 kubelet[2630]: E0912 00:00:03.464862 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44eeabc6fd2a9f194a35349b327fa98179d23c872122a63410e0e0be1e5e029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.464949 kubelet[2630]: E0912 00:00:03.464931 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44eeabc6fd2a9f194a35349b327fa98179d23c872122a63410e0e0be1e5e029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:03.464982 
kubelet[2630]: E0912 00:00:03.464952 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44eeabc6fd2a9f194a35349b327fa98179d23c872122a63410e0e0be1e5e029\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:03.465084 kubelet[2630]: E0912 00:00:03.465004 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-g56ln_calico-system(743f1d9d-1a8f-4684-bf04-5f3c17d51801)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-g56ln_calico-system(743f1d9d-1a8f-4684-bf04-5f3c17d51801)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b44eeabc6fd2a9f194a35349b327fa98179d23c872122a63410e0e0be1e5e029\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-g56ln" podUID="743f1d9d-1a8f-4684-bf04-5f3c17d51801" Sep 12 00:00:03.473661 containerd[1512]: time="2025-09-12T00:00:03.473614118Z" level=error msg="Failed to destroy network for sandbox \"125a792492844232861c667e21840aa0ce7da38dfc6881ba1c83b332229cf010\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.475485 containerd[1512]: time="2025-09-12T00:00:03.475286000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf794d5db-pmpnk,Uid:59115248-060d-4fd7-a923-109785bfe839,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a792492844232861c667e21840aa0ce7da38dfc6881ba1c83b332229cf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.476063 kubelet[2630]: E0912 00:00:03.475895 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a792492844232861c667e21840aa0ce7da38dfc6881ba1c83b332229cf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.476063 kubelet[2630]: E0912 00:00:03.475955 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a792492844232861c667e21840aa0ce7da38dfc6881ba1c83b332229cf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 00:00:03.476063 kubelet[2630]: E0912 00:00:03.475975 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"125a792492844232861c667e21840aa0ce7da38dfc6881ba1c83b332229cf010\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 00:00:03.476190 kubelet[2630]: E0912 00:00:03.476015 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bf794d5db-pmpnk_calico-system(59115248-060d-4fd7-a923-109785bfe839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bf794d5db-pmpnk_calico-system(59115248-060d-4fd7-a923-109785bfe839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"125a792492844232861c667e21840aa0ce7da38dfc6881ba1c83b332229cf010\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bf794d5db-pmpnk" podUID="59115248-060d-4fd7-a923-109785bfe839" Sep 12 00:00:03.498661 containerd[1512]: time="2025-09-12T00:00:03.498254705Z" level=error msg="Failed to destroy network for sandbox \"796c6b443834abef380c4194830fb7c4b9499e18254f29306e374e03c7dd8cd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.500532 containerd[1512]: time="2025-09-12T00:00:03.500496147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-j6lg2,Uid:033b34fe-8d24-40ac-9f3c-8e88b05e828e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c6b443834abef380c4194830fb7c4b9499e18254f29306e374e03c7dd8cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.500976 kubelet[2630]: E0912 00:00:03.500928 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c6b443834abef380c4194830fb7c4b9499e18254f29306e374e03c7dd8cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.501048 kubelet[2630]: E0912 00:00:03.501001 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c6b443834abef380c4194830fb7c4b9499e18254f29306e374e03c7dd8cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" Sep 12 00:00:03.501048 kubelet[2630]: E0912 00:00:03.501023 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796c6b443834abef380c4194830fb7c4b9499e18254f29306e374e03c7dd8cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" Sep 12 00:00:03.501097 kubelet[2630]: E0912 00:00:03.501065 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-948cb9647-j6lg2_calico-apiserver(033b34fe-8d24-40ac-9f3c-8e88b05e828e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cb9647-j6lg2_calico-apiserver(033b34fe-8d24-40ac-9f3c-8e88b05e828e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"796c6b443834abef380c4194830fb7c4b9499e18254f29306e374e03c7dd8cd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" podUID="033b34fe-8d24-40ac-9f3c-8e88b05e828e" Sep 12 00:00:03.502876 containerd[1512]: time="2025-09-12T00:00:03.502005749Z" level=error msg="Failed to destroy network for sandbox \"4a9ad0488839c0321ff0cc04218c783126b4e594c4d74e1a9c319a882a774e74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.504390 containerd[1512]: time="2025-09-12T00:00:03.504346712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-w47mh,Uid:d1a33081-1e30-4a18-baff-a8e2bfa85db2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a9ad0488839c0321ff0cc04218c783126b4e594c4d74e1a9c319a882a774e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.505772 kubelet[2630]: E0912 00:00:03.505725 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a9ad0488839c0321ff0cc04218c783126b4e594c4d74e1a9c319a882a774e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.505833 kubelet[2630]: E0912 00:00:03.505794 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a9ad0488839c0321ff0cc04218c783126b4e594c4d74e1a9c319a882a774e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" Sep 12 00:00:03.505833 kubelet[2630]: E0912 00:00:03.505813 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a9ad0488839c0321ff0cc04218c783126b4e594c4d74e1a9c319a882a774e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" Sep 12 00:00:03.505896 kubelet[2630]: E0912 00:00:03.505850 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cb9647-w47mh_calico-apiserver(d1a33081-1e30-4a18-baff-a8e2bfa85db2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cb9647-w47mh_calico-apiserver(d1a33081-1e30-4a18-baff-a8e2bfa85db2)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"4a9ad0488839c0321ff0cc04218c783126b4e594c4d74e1a9c319a882a774e74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" podUID="d1a33081-1e30-4a18-baff-a8e2bfa85db2" Sep 12 00:00:03.510565 containerd[1512]: time="2025-09-12T00:00:03.510446918Z" level=error msg="Failed to destroy network for sandbox \"8725e6d5cd4a9860fd6eac53413be15567c1c5c1b39d3bef453eff3adce45836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.511560 containerd[1512]: time="2025-09-12T00:00:03.511506559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669f89fcc5-8lg6p,Uid:3c075dbc-5473-4587-8e32-5346879edeb3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8725e6d5cd4a9860fd6eac53413be15567c1c5c1b39d3bef453eff3adce45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.511918 kubelet[2630]: E0912 00:00:03.511879 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8725e6d5cd4a9860fd6eac53413be15567c1c5c1b39d3bef453eff3adce45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:03.511969 kubelet[2630]: E0912 00:00:03.511935 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8725e6d5cd4a9860fd6eac53413be15567c1c5c1b39d3bef453eff3adce45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" Sep 12 00:00:03.511969 kubelet[2630]: E0912 00:00:03.511955 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8725e6d5cd4a9860fd6eac53413be15567c1c5c1b39d3bef453eff3adce45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" Sep 12 00:00:03.512042 kubelet[2630]: E0912 00:00:03.512000 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-669f89fcc5-8lg6p_calico-system(3c075dbc-5473-4587-8e32-5346879edeb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669f89fcc5-8lg6p_calico-system(3c075dbc-5473-4587-8e32-5346879edeb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8725e6d5cd4a9860fd6eac53413be15567c1c5c1b39d3bef453eff3adce45836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" podUID="3c075dbc-5473-4587-8e32-5346879edeb3" Sep 12 00:00:04.326725 systemd[1]: Created slice kubepods-besteffort-pod0cbcc7da_746e_4cfb_9c94_96d5f1400fdf.slice - libcontainer container kubepods-besteffort-pod0cbcc7da_746e_4cfb_9c94_96d5f1400fdf.slice. Sep 12 00:00:04.329314 containerd[1512]: time="2025-09-12T00:00:04.329262115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fv47h,Uid:0cbcc7da-746e-4cfb-9c94-96d5f1400fdf,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:04.373846 containerd[1512]: time="2025-09-12T00:00:04.373792361Z" level=error msg="Failed to destroy network for sandbox \"4ba6cdb0d8256be161d06cf89e1079895d5ea8567ce8df32d77675fdc3ce832a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:04.375061 containerd[1512]: time="2025-09-12T00:00:04.375026082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fv47h,Uid:0cbcc7da-746e-4cfb-9c94-96d5f1400fdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6cdb0d8256be161d06cf89e1079895d5ea8567ce8df32d77675fdc3ce832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:04.375617 systemd[1]: run-netns-cni\x2d55d6ceed\x2d36c3\x2d37e8\x2d3542\x2dd3d48bccda16.mount: Deactivated successfully. Sep 12 00:00:04.376420 kubelet[2630]: E0912 00:00:04.375920 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6cdb0d8256be161d06cf89e1079895d5ea8567ce8df32d77675fdc3ce832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:04.376420 kubelet[2630]: E0912 00:00:04.376019 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6cdb0d8256be161d06cf89e1079895d5ea8567ce8df32d77675fdc3ce832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fv47h" Sep 12 00:00:04.376420 kubelet[2630]: E0912 00:00:04.376039 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ba6cdb0d8256be161d06cf89e1079895d5ea8567ce8df32d77675fdc3ce832a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fv47h" Sep 12 00:00:04.376698 kubelet[2630]: E0912 00:00:04.376354 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fv47h_calico-system(0cbcc7da-746e-4cfb-9c94-96d5f1400fdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fv47h_calico-system(0cbcc7da-746e-4cfb-9c94-96d5f1400fdf)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"4ba6cdb0d8256be161d06cf89e1079895d5ea8567ce8df32d77675fdc3ce832a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 12 00:00:14.322176 containerd[1512]: time="2025-09-12T00:00:14.322083867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-w47mh,Uid:d1a33081-1e30-4a18-baff-a8e2bfa85db2,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:00:14.322904 containerd[1512]: time="2025-09-12T00:00:14.322873068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g56ln,Uid:743f1d9d-1a8f-4684-bf04-5f3c17d51801,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:14.380785 containerd[1512]: time="2025-09-12T00:00:14.380725899Z" level=error msg="Failed to destroy network for sandbox \"6b676725a09d500a90d2c37984ec361f2d0e88cc315b9ba1bb829018a7d1812e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:14.383228 containerd[1512]: time="2025-09-12T00:00:14.383024540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-w47mh,Uid:d1a33081-1e30-4a18-baff-a8e2bfa85db2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b676725a09d500a90d2c37984ec361f2d0e88cc315b9ba1bb829018a7d1812e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:14.383376 systemd[1]: run-netns-cni\x2d0e07eb65\x2db723\x2d658f\x2d7107\x2d5c65d13cb9c5.mount: Deactivated successfully. 
Sep 12 00:00:14.384485 kubelet[2630]: E0912 00:00:14.384423 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b676725a09d500a90d2c37984ec361f2d0e88cc315b9ba1bb829018a7d1812e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:14.384717 kubelet[2630]: E0912 00:00:14.384486 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b676725a09d500a90d2c37984ec361f2d0e88cc315b9ba1bb829018a7d1812e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" Sep 12 00:00:14.384717 kubelet[2630]: E0912 00:00:14.384507 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b676725a09d500a90d2c37984ec361f2d0e88cc315b9ba1bb829018a7d1812e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" Sep 12 00:00:14.384717 kubelet[2630]: E0912 00:00:14.384554 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cb9647-w47mh_calico-apiserver(d1a33081-1e30-4a18-baff-a8e2bfa85db2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cb9647-w47mh_calico-apiserver(d1a33081-1e30-4a18-baff-a8e2bfa85db2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b676725a09d500a90d2c37984ec361f2d0e88cc315b9ba1bb829018a7d1812e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" podUID="d1a33081-1e30-4a18-baff-a8e2bfa85db2" Sep 12 00:00:14.401721 containerd[1512]: time="2025-09-12T00:00:14.401670030Z" level=error msg="Failed to destroy network for sandbox \"1317813e1b1506246c011a1d370d06b5b045c33a59532630d1b98e91c8821224\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:14.403415 containerd[1512]: time="2025-09-12T00:00:14.403372751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g56ln,Uid:743f1d9d-1a8f-4684-bf04-5f3c17d51801,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317813e1b1506246c011a1d370d06b5b045c33a59532630d1b98e91c8821224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:14.403564 systemd[1]: run-netns-cni\x2d248ce85e\x2db67f\x2df87a\x2d2cac\x2d724cd738f4a5.mount: Deactivated successfully. 
Sep 12 00:00:14.404553 kubelet[2630]: E0912 00:00:14.404275 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317813e1b1506246c011a1d370d06b5b045c33a59532630d1b98e91c8821224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:14.404553 kubelet[2630]: E0912 00:00:14.404356 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317813e1b1506246c011a1d370d06b5b045c33a59532630d1b98e91c8821224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:14.404553 kubelet[2630]: E0912 00:00:14.404379 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1317813e1b1506246c011a1d370d06b5b045c33a59532630d1b98e91c8821224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g56ln" Sep 12 00:00:14.404676 kubelet[2630]: E0912 00:00:14.404490 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-g56ln_calico-system(743f1d9d-1a8f-4684-bf04-5f3c17d51801)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-g56ln_calico-system(743f1d9d-1a8f-4684-bf04-5f3c17d51801)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1317813e1b1506246c011a1d370d06b5b045c33a59532630d1b98e91c8821224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-g56ln" podUID="743f1d9d-1a8f-4684-bf04-5f3c17d51801" Sep 12 00:00:15.253165 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:59204.service - OpenSSH per-connection server daemon (10.0.0.1:59204). Sep 12 00:00:15.322098 containerd[1512]: time="2025-09-12T00:00:15.322040877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fv47h,Uid:0cbcc7da-746e-4cfb-9c94-96d5f1400fdf,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:15.322247 containerd[1512]: time="2025-09-12T00:00:15.322041477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669f89fcc5-8lg6p,Uid:3c075dbc-5473-4587-8e32-5346879edeb3,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:15.442360 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 59204 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 12 00:00:15.444445 sshd-session[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:00:15.452997 systemd-logind[1488]: New session 8 of user core. Sep 12 00:00:15.464932 systemd[1]: Started session-8.scope - Session 8 of User core. 
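
Note the timing: the same pods that failed at 00:00:03 are being retried at 00:00:14-00:00:15. When a sync fails, the kubelet pod worker logs "Error syncing pod, skipping", requeues the pod, and waits longer between attempts; the exact backoff policy is not visible in the log. A generic capped exponential backoff of that shape (retryWithBackoff and its parameters are invented for illustration, not kubelet's actual values):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps retrying an operation, doubling the wait up to a
// cap. The shape matches the retries seen in the log; the numbers do not.
func retryWithBackoff(op func() error, max time.Duration) {
	delay := time.Second
	for {
		err := op()
		if err == nil {
			return
		}
		fmt.Printf("Error syncing pod, skipping: %v (next attempt in %v)\n", err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		if attempts++; attempts < 4 {
			return errors.New("stat /var/lib/calico/nodename: no such file or directory")
		}
		return nil // calico/node came up; the sandbox is finally created
	}, 5*time.Minute)
}
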
Sep 12 00:00:15.605618 sshd[3779]: Connection closed by 10.0.0.1 port 59204 Sep 12 00:00:15.607141 sshd-session[3777]: pam_unix(sshd:session): session closed for user core Sep 12 00:00:15.616804 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:59204.service: Deactivated successfully. Sep 12 00:00:15.619155 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 00:00:15.619987 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit. Sep 12 00:00:15.621607 systemd-logind[1488]: Removed session 8. Sep 12 00:00:15.720007 containerd[1512]: time="2025-09-12T00:00:15.719893718Z" level=error msg="Failed to destroy network for sandbox \"c86db4262d348458e4d75677697daa2881f0c8641b0b33ec642099dc40e6389c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:15.721461 containerd[1512]: time="2025-09-12T00:00:15.721398919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669f89fcc5-8lg6p,Uid:3c075dbc-5473-4587-8e32-5346879edeb3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86db4262d348458e4d75677697daa2881f0c8641b0b33ec642099dc40e6389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:15.721855 kubelet[2630]: E0912 00:00:15.721652 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86db4262d348458e4d75677697daa2881f0c8641b0b33ec642099dc40e6389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:15.721855 kubelet[2630]: E0912 00:00:15.721707 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86db4262d348458e4d75677697daa2881f0c8641b0b33ec642099dc40e6389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" Sep 12 00:00:15.721855 kubelet[2630]: E0912 00:00:15.721728 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c86db4262d348458e4d75677697daa2881f0c8641b0b33ec642099dc40e6389c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" Sep 12 00:00:15.722228 containerd[1512]: time="2025-09-12T00:00:15.722183919Z" level=error msg="Failed to destroy network for sandbox \"ee73c8be2054650ba43ce0da5d9dc882905c62c796b7e677bf1cf5337e323b31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:15.722685 kubelet[2630]: E0912 00:00:15.722314 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-669f89fcc5-8lg6p_calico-system(3c075dbc-5473-4587-8e32-5346879edeb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669f89fcc5-8lg6p_calico-system(3c075dbc-5473-4587-8e32-5346879edeb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c86db4262d348458e4d75677697daa2881f0c8641b0b33ec642099dc40e6389c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" podUID="3c075dbc-5473-4587-8e32-5346879edeb3" Sep 12 00:00:15.725031 systemd[1]: run-netns-cni\x2d5136d5d5\x2d1f12\x2de5da\x2d29b8\x2dd020a1b3e42b.mount: Deactivated successfully. Sep 12 00:00:15.725152 containerd[1512]: time="2025-09-12T00:00:15.725050161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fv47h,Uid:0cbcc7da-746e-4cfb-9c94-96d5f1400fdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee73c8be2054650ba43ce0da5d9dc882905c62c796b7e677bf1cf5337e323b31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:15.727541 kubelet[2630]: E0912 00:00:15.727507 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee73c8be2054650ba43ce0da5d9dc882905c62c796b7e677bf1cf5337e323b31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:15.727827 kubelet[2630]: E0912 00:00:15.727658 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee73c8be2054650ba43ce0da5d9dc882905c62c796b7e677bf1cf5337e323b31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fv47h" Sep 12 00:00:15.727827 kubelet[2630]: E0912 00:00:15.727684 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee73c8be2054650ba43ce0da5d9dc882905c62c796b7e677bf1cf5337e323b31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fv47h" Sep 12 00:00:15.728188 systemd[1]: run-netns-cni\x2dc22df1d3\x2da91c\x2dc58b\x2dc2f6\x2dd1ae56e5dd6f.mount: Deactivated successfully. 
Sep 12 00:00:15.729440 kubelet[2630]: E0912 00:00:15.727737 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fv47h_calico-system(0cbcc7da-746e-4cfb-9c94-96d5f1400fdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fv47h_calico-system(0cbcc7da-746e-4cfb-9c94-96d5f1400fdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee73c8be2054650ba43ce0da5d9dc882905c62c796b7e677bf1cf5337e323b31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fv47h" podUID="0cbcc7da-746e-4cfb-9c94-96d5f1400fdf" Sep 12 00:00:16.062465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678056030.mount: Deactivated successfully. Sep 12 00:00:16.100682 containerd[1512]: time="2025-09-12T00:00:16.100618588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 00:00:16.120021 containerd[1512]: time="2025-09-12T00:00:16.119888277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 12.657566692s" Sep 12 00:00:16.120021 containerd[1512]: time="2025-09-12T00:00:16.119923797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 00:00:16.128263 containerd[1512]: time="2025-09-12T00:00:16.128198001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:16.129019 containerd[1512]: time="2025-09-12T00:00:16.128983241Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:16.129485 containerd[1512]: time="2025-09-12T00:00:16.129462881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:16.133467 containerd[1512]: time="2025-09-12T00:00:16.132983323Z" level=info msg="CreateContainer within sandbox \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 00:00:16.140640 containerd[1512]: time="2025-09-12T00:00:16.140609607Z" level=info msg="Container 4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:00:16.148943 containerd[1512]: time="2025-09-12T00:00:16.148892971Z" level=info msg="CreateContainer within sandbox \"736d982774ec107127b29dd15f4cfa43f25d7ddcd75d4dcb9c43fdfa0853532c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1\"" Sep 12 00:00:16.150292 containerd[1512]: time="2025-09-12T00:00:16.150260211Z" level=info msg="StartContainer for \"4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1\"" Sep 
12 00:00:16.151847 containerd[1512]: time="2025-09-12T00:00:16.151815732Z" level=info msg="connecting to shim 4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1" address="unix:///run/containerd/s/47ebc990d069aa280af0881b09e65a66541d983bc3274983ab696607d0060a28" protocol=ttrpc version=3 Sep 12 00:00:16.188910 systemd[1]: Started cri-containerd-4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1.scope - libcontainer container 4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1. Sep 12 00:00:16.224237 containerd[1512]: time="2025-09-12T00:00:16.224185206Z" level=info msg="StartContainer for \"4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1\" returns successfully" Sep 12 00:00:16.322082 containerd[1512]: time="2025-09-12T00:00:16.321977733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf794d5db-pmpnk,Uid:59115248-060d-4fd7-a923-109785bfe839,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:16.322400 containerd[1512]: time="2025-09-12T00:00:16.322248453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-j6lg2,Uid:033b34fe-8d24-40ac-9f3c-8e88b05e828e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:00:16.341895 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 00:00:16.342005 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 00:00:16.384963 containerd[1512]: time="2025-09-12T00:00:16.384799923Z" level=error msg="Failed to destroy network for sandbox \"0ce489466ebe2b45e85f779559f01c2ed287e35832906f0c2f1170bc4f9e5354\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:16.387064 containerd[1512]: time="2025-09-12T00:00:16.386742844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-j6lg2,Uid:033b34fe-8d24-40ac-9f3c-8e88b05e828e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce489466ebe2b45e85f779559f01c2ed287e35832906f0c2f1170bc4f9e5354\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:16.387927 kubelet[2630]: E0912 00:00:16.387887 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce489466ebe2b45e85f779559f01c2ed287e35832906f0c2f1170bc4f9e5354\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:16.387981 kubelet[2630]: E0912 00:00:16.387945 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ce489466ebe2b45e85f779559f01c2ed287e35832906f0c2f1170bc4f9e5354\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" Sep 12 00:00:16.387981 kubelet[2630]: E0912 00:00:16.387965 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0ce489466ebe2b45e85f779559f01c2ed287e35832906f0c2f1170bc4f9e5354\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" Sep 12 00:00:16.388037 kubelet[2630]: E0912 00:00:16.388012 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cb9647-j6lg2_calico-apiserver(033b34fe-8d24-40ac-9f3c-8e88b05e828e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cb9647-j6lg2_calico-apiserver(033b34fe-8d24-40ac-9f3c-8e88b05e828e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ce489466ebe2b45e85f779559f01c2ed287e35832906f0c2f1170bc4f9e5354\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" podUID="033b34fe-8d24-40ac-9f3c-8e88b05e828e" Sep 12 00:00:16.401089 containerd[1512]: time="2025-09-12T00:00:16.401032410Z" level=error msg="Failed to destroy network for sandbox \"ab041aff25762ffe6899adb68920de02d829986c4c9305fcaed10419396296f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:16.401946 containerd[1512]: time="2025-09-12T00:00:16.401905211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bf794d5db-pmpnk,Uid:59115248-060d-4fd7-a923-109785bfe839,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab041aff25762ffe6899adb68920de02d829986c4c9305fcaed10419396296f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:16.402195 kubelet[2630]: E0912 00:00:16.402139 2630 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab041aff25762ffe6899adb68920de02d829986c4c9305fcaed10419396296f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:00:16.402258 kubelet[2630]: E0912 00:00:16.402218 2630 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab041aff25762ffe6899adb68920de02d829986c4c9305fcaed10419396296f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 00:00:16.402258 kubelet[2630]: E0912 00:00:16.402239 2630 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab041aff25762ffe6899adb68920de02d829986c4c9305fcaed10419396296f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bf794d5db-pmpnk" Sep 12 
00:00:16.402326 kubelet[2630]: E0912 00:00:16.402289 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bf794d5db-pmpnk_calico-system(59115248-060d-4fd7-a923-109785bfe839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bf794d5db-pmpnk_calico-system(59115248-060d-4fd7-a923-109785bfe839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab041aff25762ffe6899adb68920de02d829986c4c9305fcaed10419396296f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bf794d5db-pmpnk" podUID="59115248-060d-4fd7-a923-109785bfe839" Sep 12 00:00:16.499740 kubelet[2630]: I0912 00:00:16.499708 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:00:16.500072 kubelet[2630]: E0912 00:00:16.500054 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:16.550222 kubelet[2630]: I0912 00:00:16.550148 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-768q9" podStartSLOduration=0.98065192 podStartE2EDuration="22.550131001s" podCreationTimestamp="2025-09-11 23:59:54 +0000 UTC" firstStartedPulling="2025-09-11 23:59:54.551686436 +0000 UTC m=+21.331087271" lastFinishedPulling="2025-09-12 00:00:16.121165517 +0000 UTC m=+42.900566352" observedRunningTime="2025-09-12 00:00:16.550007361 +0000 UTC m=+43.329408196" watchObservedRunningTime="2025-09-12 00:00:16.550131001 +0000 UTC m=+43.329531836" Sep 12 00:00:16.584630 kubelet[2630]: I0912 00:00:16.584348 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59115248-060d-4fd7-a923-109785bfe839-whisker-ca-bundle\") pod \"59115248-060d-4fd7-a923-109785bfe839\" (UID: \"59115248-060d-4fd7-a923-109785bfe839\") " Sep 12 00:00:16.584630 kubelet[2630]: I0912 00:00:16.584400 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf2zl\" (UniqueName: \"kubernetes.io/projected/59115248-060d-4fd7-a923-109785bfe839-kube-api-access-jf2zl\") pod \"59115248-060d-4fd7-a923-109785bfe839\" (UID: \"59115248-060d-4fd7-a923-109785bfe839\") " Sep 12 00:00:16.584630 kubelet[2630]: I0912 00:00:16.584425 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59115248-060d-4fd7-a923-109785bfe839-whisker-backend-key-pair\") pod \"59115248-060d-4fd7-a923-109785bfe839\" (UID: \"59115248-060d-4fd7-a923-109785bfe839\") " Sep 12 00:00:16.590067 kubelet[2630]: I0912 00:00:16.590018 2630 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59115248-060d-4fd7-a923-109785bfe839-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "59115248-060d-4fd7-a923-109785bfe839" (UID: "59115248-060d-4fd7-a923-109785bfe839"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 00:00:16.596804 systemd[1]: var-lib-kubelet-pods-59115248\x2d060d\x2d4fd7\x2da923\x2d109785bfe839-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
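
The "Nameserver limits exceeded" warnings are kubelet trimming the pod resolv.conf: the node has more nameservers configured than kubelet will propagate, so it keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 in the applied line) and drops the rest. A sketch of that trimming (trimNameservers is illustrative, not kubelet's code; the three-entry cap matches the applied line in this log):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// trimNameservers keeps at most max "nameserver" entries from a resolv.conf,
// in order, mirroring the behavior behind the warning above.
func trimNameservers(resolvConf string, max int) []string {
	var out []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" && len(out) < max {
			out = append(out, fields[1])
		}
	}
	return out
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(trimNameservers(conf, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
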
Sep 12 00:00:16.599677 systemd[1]: var-lib-kubelet-pods-59115248\x2d060d\x2d4fd7\x2da923\x2d109785bfe839-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djf2zl.mount: Deactivated successfully. Sep 12 00:00:16.601272 kubelet[2630]: I0912 00:00:16.600373 2630 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59115248-060d-4fd7-a923-109785bfe839-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "59115248-060d-4fd7-a923-109785bfe839" (UID: "59115248-060d-4fd7-a923-109785bfe839"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 00:00:16.601272 kubelet[2630]: I0912 00:00:16.600379 2630 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59115248-060d-4fd7-a923-109785bfe839-kube-api-access-jf2zl" (OuterVolumeSpecName: "kube-api-access-jf2zl") pod "59115248-060d-4fd7-a923-109785bfe839" (UID: "59115248-060d-4fd7-a923-109785bfe839"). InnerVolumeSpecName "kube-api-access-jf2zl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 00:00:16.686137 kubelet[2630]: I0912 00:00:16.686093 2630 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59115248-060d-4fd7-a923-109785bfe839-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 00:00:16.686137 kubelet[2630]: I0912 00:00:16.686127 2630 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jf2zl\" (UniqueName: \"kubernetes.io/projected/59115248-060d-4fd7-a923-109785bfe839-kube-api-access-jf2zl\") on node \"localhost\" DevicePath \"\"" Sep 12 00:00:16.686137 kubelet[2630]: I0912 00:00:16.686138 2630 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59115248-060d-4fd7-a923-109785bfe839-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 00:00:16.702775 containerd[1512]: time="2025-09-12T00:00:16.702600834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1\" id:\"6d9363d6e622c15a75f039b00e39da3bc766929fdb9874be66095c26d1ef864f\" pid:3991 exit_status:1 exited_at:{seconds:1757635216 nanos:702060793}" Sep 12 00:00:17.336075 systemd[1]: Removed slice kubepods-besteffort-pod59115248_060d_4fd7_a923_109785bfe839.slice - libcontainer container kubepods-besteffort-pod59115248_060d_4fd7_a923_109785bfe839.slice. Sep 12 00:00:17.505073 kubelet[2630]: E0912 00:00:17.503831 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:17.582839 systemd[1]: Created slice kubepods-besteffort-pod79fbb283_789e_4d71_93b2_9180563bccd6.slice - libcontainer container kubepods-besteffort-pod79fbb283_789e_4d71_93b2_9180563bccd6.slice. 
Sep 12 00:00:17.591877 kubelet[2630]: I0912 00:00:17.591617 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79fbb283-789e-4d71-93b2-9180563bccd6-whisker-ca-bundle\") pod \"whisker-6c4bf47dcc-j8lvh\" (UID: \"79fbb283-789e-4d71-93b2-9180563bccd6\") " pod="calico-system/whisker-6c4bf47dcc-j8lvh" Sep 12 00:00:17.591877 kubelet[2630]: I0912 00:00:17.591663 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn465\" (UniqueName: \"kubernetes.io/projected/79fbb283-789e-4d71-93b2-9180563bccd6-kube-api-access-wn465\") pod \"whisker-6c4bf47dcc-j8lvh\" (UID: \"79fbb283-789e-4d71-93b2-9180563bccd6\") " pod="calico-system/whisker-6c4bf47dcc-j8lvh" Sep 12 00:00:17.591877 kubelet[2630]: I0912 00:00:17.591732 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79fbb283-789e-4d71-93b2-9180563bccd6-whisker-backend-key-pair\") pod \"whisker-6c4bf47dcc-j8lvh\" (UID: \"79fbb283-789e-4d71-93b2-9180563bccd6\") " pod="calico-system/whisker-6c4bf47dcc-j8lvh" Sep 12 00:00:17.627499 containerd[1512]: time="2025-09-12T00:00:17.627421414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1\" id:\"2d5f87feff9d8f2bb3a2644cd6e1102d1f5e2c813e70dd1b65f6a9c9acb65997\" pid:4026 exit_status:1 exited_at:{seconds:1757635217 nanos:627125734}" Sep 12 00:00:17.888558 containerd[1512]: time="2025-09-12T00:00:17.888520770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4bf47dcc-j8lvh,Uid:79fbb283-789e-4d71-93b2-9180563bccd6,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:18.092600 systemd-networkd[1427]: cali418290403fd: Link UP Sep 12 00:00:18.093589 systemd-networkd[1427]: cali418290403fd: Gained carrier Sep 12 00:00:18.100471 systemd-networkd[1427]: vxlan.calico: Link UP Sep 12 00:00:18.100476 systemd-networkd[1427]: vxlan.calico: Gained carrier Sep 12 00:00:18.113341 containerd[1512]: 2025-09-12 00:00:17.959 [INFO][4171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0 whisker-6c4bf47dcc- calico-system 79fbb283-789e-4d71-93b2-9180563bccd6 962 0 2025-09-12 00:00:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c4bf47dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c4bf47dcc-j8lvh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali418290403fd [] [] }} ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-" Sep 12 00:00:18.113341 containerd[1512]: 2025-09-12 00:00:17.959 [INFO][4171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.113341 containerd[1512]: 2025-09-12 00:00:18.037 [INFO][4186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" HandleID="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Workload="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.037 [INFO][4186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" HandleID="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Workload="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c4bf47dcc-j8lvh", "timestamp":"2025-09-12 00:00:18.037065835 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.037 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.037 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.037 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.048 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" host="localhost" Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.057 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.061 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.063 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.067 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:18.113518 containerd[1512]: 2025-09-12 00:00:18.067 [INFO][4186] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" host="localhost" Sep 12 00:00:18.113717 containerd[1512]: 2025-09-12 00:00:18.069 [INFO][4186] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85 Sep 12 00:00:18.113717 containerd[1512]: 2025-09-12 00:00:18.074 [INFO][4186] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" host="localhost" Sep 12 00:00:18.113717 containerd[1512]: 2025-09-12 00:00:18.081 [INFO][4186] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" host="localhost" Sep 12 00:00:18.113717 containerd[1512]: 2025-09-12 00:00:18.081 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" host="localhost" Sep 12 00:00:18.113717 containerd[1512]: 2025-09-12 00:00:18.081 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:00:18.113717 containerd[1512]: 2025-09-12 00:00:18.081 [INFO][4186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" HandleID="k8s-pod-network.ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Workload="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.113851 containerd[1512]: 2025-09-12 00:00:18.085 [INFO][4171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0", GenerateName:"whisker-6c4bf47dcc-", Namespace:"calico-system", SelfLink:"", UID:"79fbb283-789e-4d71-93b2-9180563bccd6", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c4bf47dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c4bf47dcc-j8lvh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali418290403fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:18.113851 containerd[1512]: 2025-09-12 00:00:18.085 [INFO][4171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.113917 containerd[1512]: 2025-09-12 00:00:18.085 [INFO][4171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali418290403fd ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.113917 containerd[1512]: 2025-09-12 00:00:18.093 [INFO][4171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.113954 containerd[1512]: 2025-09-12 00:00:18.094 [INFO][4171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0", GenerateName:"whisker-6c4bf47dcc-", Namespace:"calico-system", SelfLink:"", UID:"79fbb283-789e-4d71-93b2-9180563bccd6", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c4bf47dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85", Pod:"whisker-6c4bf47dcc-j8lvh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali418290403fd", MAC:"ee:8c:de:69:f1:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:18.114004 containerd[1512]: 2025-09-12 00:00:18.108 [INFO][4171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" Namespace="calico-system" Pod="whisker-6c4bf47dcc-j8lvh" WorkloadEndpoint="localhost-k8s-whisker--6c4bf47dcc--j8lvh-eth0" Sep 12 00:00:18.196091 containerd[1512]: time="2025-09-12T00:00:18.195915942Z" level=info msg="connecting to shim ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85" address="unix:///run/containerd/s/b0cb58d9b8a2064fb27dd39d1c2b2ece9fba662227cbae50b0b710d8624da225" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:00:18.238086 systemd[1]: Started cri-containerd-ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85.scope - libcontainer container ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85. 
Sep 12 00:00:18.252426 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:00:18.287546 containerd[1512]: time="2025-09-12T00:00:18.287494460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4bf47dcc-j8lvh,Uid:79fbb283-789e-4d71-93b2-9180563bccd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85\"" Sep 12 00:00:18.289866 containerd[1512]: time="2025-09-12T00:00:18.289830101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 00:00:18.321659 kubelet[2630]: E0912 00:00:18.321617 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:18.321822 kubelet[2630]: E0912 00:00:18.321684 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:18.322324 containerd[1512]: time="2025-09-12T00:00:18.322292234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clc9x,Uid:3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd,Namespace:kube-system,Attempt:0,}" Sep 12 00:00:18.322507 containerd[1512]: time="2025-09-12T00:00:18.322332994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbffs,Uid:093381d9-e75d-4201-91b6-3a2eb516857c,Namespace:kube-system,Attempt:0,}" Sep 12 00:00:18.458422 systemd-networkd[1427]: calif9c1d52ebc0: Link UP Sep 12 00:00:18.459782 systemd-networkd[1427]: calif9c1d52ebc0: Gained carrier Sep 12 00:00:18.475866 containerd[1512]: 2025-09-12 00:00:18.368 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nbffs-eth0 coredns-674b8bbfcf- kube-system 093381d9-e75d-4201-91b6-3a2eb516857c 824 0 2025-09-11 23:59:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nbffs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9c1d52ebc0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-" Sep 12 00:00:18.475866 containerd[1512]: 2025-09-12 00:00:18.368 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.475866 containerd[1512]: 2025-09-12 00:00:18.394 [INFO][4349] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" HandleID="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Workload="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.394 [INFO][4349] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" 
HandleID="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Workload="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nbffs", "timestamp":"2025-09-12 00:00:18.394832185 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.395 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.395 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.395 [INFO][4349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.406 [INFO][4349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" host="localhost" Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.415 [INFO][4349] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.431 [INFO][4349] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.437 [INFO][4349] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.440 [INFO][4349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:18.476132 containerd[1512]: 2025-09-12 00:00:18.440 [INFO][4349] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" host="localhost" Sep 12 00:00:18.476394 containerd[1512]: 2025-09-12 00:00:18.442 [INFO][4349] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903 Sep 12 00:00:18.476394 containerd[1512]: 2025-09-12 00:00:18.446 [INFO][4349] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" host="localhost" Sep 12 00:00:18.476394 containerd[1512]: 2025-09-12 00:00:18.450 [INFO][4349] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" host="localhost" Sep 12 00:00:18.476394 containerd[1512]: 2025-09-12 00:00:18.451 [INFO][4349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" host="localhost" Sep 12 00:00:18.476394 containerd[1512]: 2025-09-12 00:00:18.451 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:00:18.476394 containerd[1512]: 2025-09-12 00:00:18.451 [INFO][4349] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" HandleID="k8s-pod-network.aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Workload="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.476504 containerd[1512]: 2025-09-12 00:00:18.453 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nbffs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"093381d9-e75d-4201-91b6-3a2eb516857c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nbffs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9c1d52ebc0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:18.476568 containerd[1512]: 2025-09-12 00:00:18.453 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.476568 containerd[1512]: 2025-09-12 00:00:18.453 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9c1d52ebc0 ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.476568 containerd[1512]: 2025-09-12 00:00:18.458 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.476628 
containerd[1512]: 2025-09-12 00:00:18.462 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nbffs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"093381d9-e75d-4201-91b6-3a2eb516857c", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903", Pod:"coredns-674b8bbfcf-nbffs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9c1d52ebc0", MAC:"32:ad:b7:e9:8e:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:18.476628 containerd[1512]: 2025-09-12 00:00:18.472 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" Namespace="kube-system" Pod="coredns-674b8bbfcf-nbffs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nbffs-eth0" Sep 12 00:00:18.560725 systemd-networkd[1427]: cali35a8a5f2b22: Link UP Sep 12 00:00:18.560900 systemd-networkd[1427]: cali35a8a5f2b22: Gained carrier Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.378 [INFO][4315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--clc9x-eth0 coredns-674b8bbfcf- kube-system 3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd 829 0 2025-09-11 23:59:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-clc9x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35a8a5f2b22 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.378 [INFO][4315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.411 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" HandleID="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Workload="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.411 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" HandleID="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Workload="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b8130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-clc9x", "timestamp":"2025-09-12 00:00:18.411434672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.411 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.451 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.451 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.507 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.514 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.522 [INFO][4355] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.524 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.526 [INFO][4355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.526 [INFO][4355] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.527 [INFO][4355] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.537 [INFO][4355] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.554 [INFO][4355] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.554 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" host="localhost" Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.554 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:00:18.577079 containerd[1512]: 2025-09-12 00:00:18.554 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" HandleID="k8s-pod-network.9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Workload="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.578057 containerd[1512]: 2025-09-12 00:00:18.558 [INFO][4315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--clc9x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-clc9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35a8a5f2b22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:18.578057 containerd[1512]: 2025-09-12 00:00:18.558 [INFO][4315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.578057 containerd[1512]: 2025-09-12 00:00:18.558 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35a8a5f2b22 ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.578057 containerd[1512]: 2025-09-12 00:00:18.560 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.578057 
containerd[1512]: 2025-09-12 00:00:18.561 [INFO][4315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--clc9x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a", Pod:"coredns-674b8bbfcf-clc9x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35a8a5f2b22", MAC:"ce:8f:9f:9a:af:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:18.578057 containerd[1512]: 2025-09-12 00:00:18.571 [INFO][4315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-clc9x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--clc9x-eth0" Sep 12 00:00:18.581947 containerd[1512]: time="2025-09-12T00:00:18.581817543Z" level=info msg="connecting to shim aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903" address="unix:///run/containerd/s/7c3fe81d3e66ff15282a5ea41a844aafffab76cc48ac1c4b097125bb79c6c960" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:00:18.596442 containerd[1512]: time="2025-09-12T00:00:18.596387509Z" level=info msg="connecting to shim 9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a" address="unix:///run/containerd/s/319b326020c0f129b652fa49c5ce3d989c2692abfddd2adac321802bd28e2c70" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:00:18.616935 systemd[1]: Started cri-containerd-aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903.scope - libcontainer container aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903. 
Sep 12 00:00:18.621353 systemd[1]: Started cri-containerd-9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a.scope - libcontainer container 9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a. Sep 12 00:00:18.631637 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:00:18.634316 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:00:18.656296 containerd[1512]: time="2025-09-12T00:00:18.656241494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nbffs,Uid:093381d9-e75d-4201-91b6-3a2eb516857c,Namespace:kube-system,Attempt:0,} returns sandbox id \"aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903\"" Sep 12 00:00:18.657364 kubelet[2630]: E0912 00:00:18.657337 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:18.662496 containerd[1512]: time="2025-09-12T00:00:18.662458456Z" level=info msg="CreateContainer within sandbox \"aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:00:18.671966 containerd[1512]: time="2025-09-12T00:00:18.671931380Z" level=info msg="Container eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:00:18.672230 containerd[1512]: time="2025-09-12T00:00:18.672200420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clc9x,Uid:3bbd14ad-e4c3-4bf0-abfe-08d2fa32acfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a\"" Sep 12 00:00:18.673079 kubelet[2630]: E0912 00:00:18.673053 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:18.679260 containerd[1512]: time="2025-09-12T00:00:18.679220863Z" level=info msg="CreateContainer within sandbox \"9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:00:18.687979 containerd[1512]: time="2025-09-12T00:00:18.687942107Z" level=info msg="CreateContainer within sandbox \"aea9dc32ebad6e1654741103bdc98182631dc6c6e204ad8f6eae02f7e5be8903\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8\"" Sep 12 00:00:18.688510 containerd[1512]: time="2025-09-12T00:00:18.688444267Z" level=info msg="StartContainer for \"eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8\"" Sep 12 00:00:18.689326 containerd[1512]: time="2025-09-12T00:00:18.689285307Z" level=info msg="connecting to shim eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8" address="unix:///run/containerd/s/7c3fe81d3e66ff15282a5ea41a844aafffab76cc48ac1c4b097125bb79c6c960" protocol=ttrpc version=3 Sep 12 00:00:18.691768 containerd[1512]: time="2025-09-12T00:00:18.691458828Z" level=info msg="Container 8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:00:18.701113 containerd[1512]: time="2025-09-12T00:00:18.701078112Z" level=info msg="CreateContainer within sandbox 
\"9b0087ab21124ecc5f4cc4aa7ea9078f128187d4bbd1aa4a3b99ed7c225bee2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588\"" Sep 12 00:00:18.704818 containerd[1512]: time="2025-09-12T00:00:18.704588754Z" level=info msg="StartContainer for \"8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588\"" Sep 12 00:00:18.706947 containerd[1512]: time="2025-09-12T00:00:18.706915955Z" level=info msg="connecting to shim 8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588" address="unix:///run/containerd/s/319b326020c0f129b652fa49c5ce3d989c2692abfddd2adac321802bd28e2c70" protocol=ttrpc version=3 Sep 12 00:00:18.724932 systemd[1]: Started cri-containerd-eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8.scope - libcontainer container eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8. Sep 12 00:00:18.728203 systemd[1]: Started cri-containerd-8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588.scope - libcontainer container 8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588. Sep 12 00:00:18.766309 containerd[1512]: time="2025-09-12T00:00:18.765925819Z" level=info msg="StartContainer for \"8f172013a261f09b6f5122826b0498502db013f6049e75bd3fd4b4fb3243c588\" returns successfully" Sep 12 00:00:18.767156 containerd[1512]: time="2025-09-12T00:00:18.767109060Z" level=info msg="StartContainer for \"eb15bb4e57015247e319b0eaa88fa5f3ed22ccb376cc06bfaf7f76ef8b463db8\" returns successfully" Sep 12 00:00:19.323742 kubelet[2630]: I0912 00:00:19.323705 2630 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59115248-060d-4fd7-a923-109785bfe839" path="/var/lib/kubelet/pods/59115248-060d-4fd7-a923-109785bfe839/volumes" Sep 12 00:00:19.509625 kubelet[2630]: E0912 00:00:19.509578 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:19.516700 kubelet[2630]: E0912 00:00:19.516666 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:19.521541 kubelet[2630]: I0912 00:00:19.521482 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-clc9x" podStartSLOduration=38.521467001 podStartE2EDuration="38.521467001s" podCreationTimestamp="2025-09-11 23:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:00:19.519861641 +0000 UTC m=+46.299262476" watchObservedRunningTime="2025-09-12 00:00:19.521467001 +0000 UTC m=+46.300867836" Sep 12 00:00:19.548115 kubelet[2630]: I0912 00:00:19.548048 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nbffs" podStartSLOduration=38.548030412 podStartE2EDuration="38.548030412s" podCreationTimestamp="2025-09-11 23:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:00:19.547189771 +0000 UTC m=+46.326590606" watchObservedRunningTime="2025-09-12 00:00:19.548030412 +0000 UTC m=+46.327431247" Sep 12 00:00:19.743899 systemd-networkd[1427]: cali418290403fd: Gained IPv6LL Sep 12 00:00:20.063890 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Sep 12 
00:00:20.256040 systemd-networkd[1427]: calif9c1d52ebc0: Gained IPv6LL Sep 12 00:00:20.514965 kubelet[2630]: E0912 00:00:20.514895 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:20.515528 kubelet[2630]: E0912 00:00:20.515009 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:20.575966 systemd-networkd[1427]: cali35a8a5f2b22: Gained IPv6LL Sep 12 00:00:20.629998 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:33610.service - OpenSSH per-connection server daemon (10.0.0.1:33610). Sep 12 00:00:20.681404 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 33610 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 12 00:00:20.683341 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:00:20.688820 systemd-logind[1488]: New session 9 of user core. Sep 12 00:00:20.696942 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 00:00:20.881109 sshd[4559]: Connection closed by 10.0.0.1 port 33610 Sep 12 00:00:20.881870 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Sep 12 00:00:20.887090 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:33610.service: Deactivated successfully. Sep 12 00:00:20.887125 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit. Sep 12 00:00:20.890573 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 00:00:20.893955 systemd-logind[1488]: Removed session 9. Sep 12 00:00:21.516021 kubelet[2630]: E0912 00:00:21.515985 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:21.516657 kubelet[2630]: E0912 00:00:21.516634 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:00:25.323050 containerd[1512]: time="2025-09-12T00:00:25.323006213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-w47mh,Uid:d1a33081-1e30-4a18-baff-a8e2bfa85db2,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:00:25.446424 systemd-networkd[1427]: cali98c7b58ede8: Link UP Sep 12 00:00:25.447063 systemd-networkd[1427]: cali98c7b58ede8: Gained carrier Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.368 [INFO][4582] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0 calico-apiserver-948cb9647- calico-apiserver d1a33081-1e30-4a18-baff-a8e2bfa85db2 830 0 2025-09-11 23:59:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:948cb9647 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-948cb9647-w47mh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali98c7b58ede8 [] [] }} ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.368 [INFO][4582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.395 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" HandleID="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Workload="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.395 [INFO][4600] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" HandleID="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Workload="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-948cb9647-w47mh", "timestamp":"2025-09-12 00:00:25.395628272 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.395 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.395 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.395 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.409 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.414 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.419 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.422 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.425 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.425 [INFO][4600] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.428 [INFO][4600] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577 Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.436 [INFO][4600] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.441 [INFO][4600] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.441 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" host="localhost" Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.441 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:00:25.468127 containerd[1512]: 2025-09-12 00:00:25.441 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" HandleID="k8s-pod-network.96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Workload="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.468637 containerd[1512]: 2025-09-12 00:00:25.444 [INFO][4582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0", GenerateName:"calico-apiserver-948cb9647-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1a33081-1e30-4a18-baff-a8e2bfa85db2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cb9647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-948cb9647-w47mh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98c7b58ede8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:25.468637 containerd[1512]: 2025-09-12 00:00:25.444 [INFO][4582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.468637 containerd[1512]: 2025-09-12 00:00:25.444 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98c7b58ede8 ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.468637 containerd[1512]: 2025-09-12 00:00:25.447 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.468637 containerd[1512]: 2025-09-12 00:00:25.448 [INFO][4582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0", GenerateName:"calico-apiserver-948cb9647-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1a33081-1e30-4a18-baff-a8e2bfa85db2", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cb9647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577", Pod:"calico-apiserver-948cb9647-w47mh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98c7b58ede8", MAC:"62:ce:84:a4:ef:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:25.468637 containerd[1512]: 2025-09-12 00:00:25.464 [INFO][4582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-w47mh" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--w47mh-eth0" Sep 12 00:00:25.504430 containerd[1512]: time="2025-09-12T00:00:25.504385581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:25.505036 containerd[1512]: time="2025-09-12T00:00:25.504907101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 00:00:25.505717 containerd[1512]: time="2025-09-12T00:00:25.505683301Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:25.507581 containerd[1512]: time="2025-09-12T00:00:25.507501782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:00:25.508590 containerd[1512]: time="2025-09-12T00:00:25.508559502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 7.218692601s" 
Sep 12 00:00:25.508682 containerd[1512]: time="2025-09-12T00:00:25.508596142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 00:00:25.512893 containerd[1512]: time="2025-09-12T00:00:25.512863663Z" level=info msg="CreateContainer within sandbox \"ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 00:00:25.519932 containerd[1512]: time="2025-09-12T00:00:25.519887985Z" level=info msg="connecting to shim 96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577" address="unix:///run/containerd/s/d963530038bf8026e436426d176be4710a4adc9579e5d9f97c8924069e0daa72" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:00:25.522726 containerd[1512]: time="2025-09-12T00:00:25.521897066Z" level=info msg="Container 817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:00:25.524084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388954990.mount: Deactivated successfully. Sep 12 00:00:25.529129 containerd[1512]: time="2025-09-12T00:00:25.529094587Z" level=info msg="CreateContainer within sandbox \"ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9\"" Sep 12 00:00:25.530332 containerd[1512]: time="2025-09-12T00:00:25.530279828Z" level=info msg="StartContainer for \"817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9\"" Sep 12 00:00:25.532051 containerd[1512]: time="2025-09-12T00:00:25.532017428Z" level=info msg="connecting to shim 817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9" address="unix:///run/containerd/s/b0cb58d9b8a2064fb27dd39d1c2b2ece9fba662227cbae50b0b710d8624da225" protocol=ttrpc version=3 Sep 12 00:00:25.567927 systemd[1]: Started cri-containerd-817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9.scope - libcontainer container 817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9. Sep 12 00:00:25.569294 systemd[1]: Started cri-containerd-96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577.scope - libcontainer container 96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577. Sep 12 00:00:25.582779 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:00:25.614889 containerd[1512]: time="2025-09-12T00:00:25.614842290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-w47mh,Uid:d1a33081-1e30-4a18-baff-a8e2bfa85db2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577\"" Sep 12 00:00:25.616608 containerd[1512]: time="2025-09-12T00:00:25.616529771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 00:00:25.627860 containerd[1512]: time="2025-09-12T00:00:25.627824334Z" level=info msg="StartContainer for \"817d9178477e4504915d1732a3080701455b7687b0bc9eef3d8a6d4beb5946e9\" returns successfully" Sep 12 00:00:25.900233 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:33622.service - OpenSSH per-connection server daemon (10.0.0.1:33622). 
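[Editor's note] The repeated kubelet dns.go "Nameserver limits exceeded" entries further up reflect the classic glibc resolver cap of three nameservers in resolv.conf: when the node lists more, kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A toy Go version of that truncation follows; the function name is invented for illustration, and 8.8.4.4 merely stands in for whatever extra server the node actually had.

```go
package main

import "fmt"

// capNameservers trims a nameserver list to the resolver limit, reporting
// whether anything was dropped. Hypothetical helper, not kubelet's code.
func capNameservers(ns []string, limit int) ([]string, bool) {
	if len(ns) <= limit {
		return ns, false
	}
	return ns[:limit], true
}

func main() {
	kept, truncated := capNameservers(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}, 3)
	fmt.Println(kept, truncated) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}
```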
Sep 12 00:00:25.971845 sshd[4701]: Accepted publickey for core from 10.0.0.1 port 33622 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 12 00:00:25.973301 sshd-session[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:00:25.977914 systemd-logind[1488]: New session 10 of user core. Sep 12 00:00:25.989937 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 00:00:26.133721 sshd[4703]: Connection closed by 10.0.0.1 port 33622 Sep 12 00:00:26.135208 sshd-session[4701]: pam_unix(sshd:session): session closed for user core Sep 12 00:00:26.139612 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit. Sep 12 00:00:26.139868 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:33622.service: Deactivated successfully. Sep 12 00:00:26.141673 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 00:00:26.143872 systemd-logind[1488]: Removed session 10. Sep 12 00:00:26.719936 systemd-networkd[1427]: cali98c7b58ede8: Gained IPv6LL Sep 12 00:00:28.322318 containerd[1512]: time="2025-09-12T00:00:28.322264265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-j6lg2,Uid:033b34fe-8d24-40ac-9f3c-8e88b05e828e,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:00:28.322671 containerd[1512]: time="2025-09-12T00:00:28.322386266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669f89fcc5-8lg6p,Uid:3c075dbc-5473-4587-8e32-5346879edeb3,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:28.322671 containerd[1512]: time="2025-09-12T00:00:28.322266225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g56ln,Uid:743f1d9d-1a8f-4684-bf04-5f3c17d51801,Namespace:calico-system,Attempt:0,}" Sep 12 00:00:28.451257 systemd-networkd[1427]: cali7acc2581f3e: Link UP Sep 12 00:00:28.451657 systemd-networkd[1427]: cali7acc2581f3e: Gained carrier Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.375 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0 calico-apiserver-948cb9647- calico-apiserver 033b34fe-8d24-40ac-9f3c-8e88b05e828e 831 0 2025-09-11 23:59:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:948cb9647 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-948cb9647-j6lg2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7acc2581f3e [] [] }} ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.375 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.403 [INFO][4769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" 
HandleID="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Workload="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.403 [INFO][4769] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" HandleID="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Workload="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035ccf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-948cb9647-j6lg2", "timestamp":"2025-09-12 00:00:28.403727203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.403 [INFO][4769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.403 [INFO][4769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.404 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.413 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.417 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.422 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.428 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.431 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.431 [INFO][4769] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.432 [INFO][4769] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.437 [INFO][4769] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.442 [INFO][4769] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.442 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" host="localhost" Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.442 [INFO][4769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:00:28.468653 containerd[1512]: 2025-09-12 00:00:28.442 [INFO][4769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" HandleID="k8s-pod-network.4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Workload="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.469409 containerd[1512]: 2025-09-12 00:00:28.448 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0", GenerateName:"calico-apiserver-948cb9647-", Namespace:"calico-apiserver", SelfLink:"", UID:"033b34fe-8d24-40ac-9f3c-8e88b05e828e", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cb9647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-948cb9647-j6lg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7acc2581f3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:28.469409 containerd[1512]: 2025-09-12 00:00:28.448 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.469409 containerd[1512]: 2025-09-12 00:00:28.448 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7acc2581f3e ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.469409 containerd[1512]: 2025-09-12 00:00:28.452 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.469409 containerd[1512]: 2025-09-12 00:00:28.452 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0", GenerateName:"calico-apiserver-948cb9647-", Namespace:"calico-apiserver", SelfLink:"", UID:"033b34fe-8d24-40ac-9f3c-8e88b05e828e", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cb9647", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c", Pod:"calico-apiserver-948cb9647-j6lg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7acc2581f3e", MAC:"8a:60:b3:c6:c4:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:00:28.469409 containerd[1512]: 2025-09-12 00:00:28.463 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" Namespace="calico-apiserver" Pod="calico-apiserver-948cb9647-j6lg2" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cb9647--j6lg2-eth0" Sep 12 00:00:28.506240 containerd[1512]: time="2025-09-12T00:00:28.506202586Z" level=info msg="connecting to shim 4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c" address="unix:///run/containerd/s/ba2669f38282f96162a7aba4c50682ca632e678e9d3ef21cc68e679b77009da9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:00:28.542642 systemd[1]: Started cri-containerd-4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c.scope - libcontainer container 4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c. 
Sep 12 00:00:28.551490 systemd-networkd[1427]: cali241a9cb2a2f: Link UP
Sep 12 00:00:28.552113 systemd-networkd[1427]: cali241a9cb2a2f: Gained carrier
Sep 12 00:00:28.565293 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.381 [INFO][4739] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--g56ln-eth0 goldmane-54d579b49d- calico-system 743f1d9d-1a8f-4684-bf04-5f3c17d51801 828 0 2025-09-11 23:59:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-g56ln eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali241a9cb2a2f [] [] }} ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.382 [INFO][4739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.406 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" HandleID="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Workload="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.407 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" HandleID="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Workload="localhost-k8s-goldmane--54d579b49d--g56ln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-g56ln", "timestamp":"2025-09-12 00:00:28.406895364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.407 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.442 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.442 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.515 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.521 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.525 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.527 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.530 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.530 [INFO][4777] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.531 [INFO][4777] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.536 [INFO][4777] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.543 [INFO][4777] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.544 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" host="localhost"
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.544 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:00:28.574701 containerd[1512]: 2025-09-12 00:00:28.544 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" HandleID="k8s-pod-network.eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Workload="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.575242 containerd[1512]: 2025-09-12 00:00:28.547 [INFO][4739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--g56ln-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"743f1d9d-1a8f-4684-bf04-5f3c17d51801", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-g56ln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali241a9cb2a2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:00:28.575242 containerd[1512]: 2025-09-12 00:00:28.548 [INFO][4739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.575242 containerd[1512]: 2025-09-12 00:00:28.548 [INFO][4739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali241a9cb2a2f ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.575242 containerd[1512]: 2025-09-12 00:00:28.553 [INFO][4739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.575242 containerd[1512]: 2025-09-12 00:00:28.556 [INFO][4739] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--g56ln-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"743f1d9d-1a8f-4684-bf04-5f3c17d51801", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe", Pod:"goldmane-54d579b49d-g56ln", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali241a9cb2a2f", MAC:"26:60:47:8b:99:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:00:28.575242 containerd[1512]: 2025-09-12 00:00:28.570 [INFO][4739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" Namespace="calico-system" Pod="goldmane-54d579b49d-g56ln" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g56ln-eth0"
Sep 12 00:00:28.594063 containerd[1512]: time="2025-09-12T00:00:28.593856965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cb9647-j6lg2,Uid:033b34fe-8d24-40ac-9f3c-8e88b05e828e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c\""
Sep 12 00:00:28.595327 containerd[1512]: time="2025-09-12T00:00:28.595291045Z" level=info msg="connecting to shim eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe" address="unix:///run/containerd/s/1c8a6b998ae596aef00380cb55039ab29f3abbed8edb5e02ffa68033e0c5798c" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:00:28.620917 systemd[1]: Started cri-containerd-eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe.scope - libcontainer container eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe.
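[Editor's sketch, not Calico source code: the IPAM lines above confirm a host affinity for the block 192.168.88.128/26 and then claim 192.168.88.134 from it. A /26 block spans 64 addresses (192.168.88.128-192.168.88.191), so every pod IP assigned in this section must fall inside it; the minimal Go check below, using only the standard library, verifies that for the addresses that appear in the log.]

```go
// Minimal block-membership check mirroring the affinity logic logged
// above: each claimed pod IP must fall inside the host-affine /26.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // host-affine block from the log
	for _, s := range []string{"192.168.88.134", "192.168.88.135", "192.168.88.136"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // all true
	}
}
```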
Sep 12 00:00:28.640887 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:00:28.661117 systemd-networkd[1427]: cali9322adb1e04: Link UP
Sep 12 00:00:28.661910 systemd-networkd[1427]: cali9322adb1e04: Gained carrier
Sep 12 00:00:28.681390 containerd[1512]: time="2025-09-12T00:00:28.681347744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g56ln,Uid:743f1d9d-1a8f-4684-bf04-5f3c17d51801,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe\""
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.380 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0 calico-kube-controllers-669f89fcc5- calico-system 3c075dbc-5473-4587-8e32-5346879edeb3 832 0 2025-09-11 23:59:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:669f89fcc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-669f89fcc5-8lg6p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9322adb1e04 [] [] }} ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.381 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.416 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" HandleID="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Workload="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.416 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" HandleID="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Workload="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-669f89fcc5-8lg6p", "timestamp":"2025-09-12 00:00:28.416206166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.416 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.544 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.544 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.616 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.625 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.630 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.633 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.637 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.637 [INFO][4775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.639 [INFO][4775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.644 [INFO][4775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.653 [INFO][4775] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.653 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" host="localhost"
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.653 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:00:28.682196 containerd[1512]: 2025-09-12 00:00:28.653 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" HandleID="k8s-pod-network.06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Workload="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.682629 containerd[1512]: 2025-09-12 00:00:28.657 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0", GenerateName:"calico-kube-controllers-669f89fcc5-", Namespace:"calico-system", SelfLink:"", UID:"3c075dbc-5473-4587-8e32-5346879edeb3", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669f89fcc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-669f89fcc5-8lg6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9322adb1e04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:00:28.682629 containerd[1512]: 2025-09-12 00:00:28.657 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.682629 containerd[1512]: 2025-09-12 00:00:28.657 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9322adb1e04 ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.682629 containerd[1512]: 2025-09-12 00:00:28.662 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.682629 containerd[1512]: 2025-09-12 00:00:28.662 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0", GenerateName:"calico-kube-controllers-669f89fcc5-", Namespace:"calico-system", SelfLink:"", UID:"3c075dbc-5473-4587-8e32-5346879edeb3", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669f89fcc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e", Pod:"calico-kube-controllers-669f89fcc5-8lg6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9322adb1e04", MAC:"96:46:93:77:67:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:00:28.682629 containerd[1512]: 2025-09-12 00:00:28.678 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" Namespace="calico-system" Pod="calico-kube-controllers-669f89fcc5-8lg6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669f89fcc5--8lg6p-eth0"
Sep 12 00:00:28.704345 containerd[1512]: time="2025-09-12T00:00:28.704306509Z" level=info msg="connecting to shim 06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e" address="unix:///run/containerd/s/72e87ebaa10240736c6a7c8f7a9b8e7e918891703cefa3195a1b40d8123d81a5" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:00:28.733025 systemd[1]: Started cri-containerd-06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e.scope - libcontainer container 06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e.
Sep 12 00:00:28.744770 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:00:28.765021 containerd[1512]: time="2025-09-12T00:00:28.764984042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669f89fcc5-8lg6p,Uid:3c075dbc-5473-4587-8e32-5346879edeb3,Namespace:calico-system,Attempt:0,} returns sandbox id \"06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e\""
Sep 12 00:00:29.323691 containerd[1512]: time="2025-09-12T00:00:29.323619000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fv47h,Uid:0cbcc7da-746e-4cfb-9c94-96d5f1400fdf,Namespace:calico-system,Attempt:0,}"
Sep 12 00:00:29.431796 systemd-networkd[1427]: cali2fea9e5ea5e: Link UP
Sep 12 00:00:29.432122 systemd-networkd[1427]: cali2fea9e5ea5e: Gained carrier
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.366 [INFO][4962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fv47h-eth0 csi-node-driver- calico-system 0cbcc7da-746e-4cfb-9c94-96d5f1400fdf 724 0 2025-09-11 23:59:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fv47h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2fea9e5ea5e [] [] }} ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.366 [INFO][4962] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.390 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" HandleID="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Workload="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.390 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" HandleID="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Workload="localhost-k8s-csi--node--driver--fv47h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fv47h", "timestamp":"2025-09-12 00:00:29.390842534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.391 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.391 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.391 [INFO][4976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.401 [INFO][4976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.406 [INFO][4976] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.410 [INFO][4976] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.411 [INFO][4976] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.413 [INFO][4976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.413 [INFO][4976] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.415 [INFO][4976] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.418 [INFO][4976] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.424 [INFO][4976] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.424 [INFO][4976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" host="localhost"
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.424 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:00:29.444460 containerd[1512]: 2025-09-12 00:00:29.424 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" HandleID="k8s-pod-network.938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Workload="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.444976 containerd[1512]: 2025-09-12 00:00:29.428 [INFO][4962] cni-plugin/k8s.go 418: Populated endpoint ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fv47h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fv47h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fea9e5ea5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:00:29.444976 containerd[1512]: 2025-09-12 00:00:29.428 [INFO][4962] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.444976 containerd[1512]: 2025-09-12 00:00:29.428 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fea9e5ea5e ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.444976 containerd[1512]: 2025-09-12 00:00:29.432 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.444976 containerd[1512]: 2025-09-12 00:00:29.432 [INFO][4962] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fv47h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cbcc7da-746e-4cfb-9c94-96d5f1400fdf", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 59, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca", Pod:"csi-node-driver-fv47h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fea9e5ea5e", MAC:"7e:9c:7c:94:26:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:00:29.444976 containerd[1512]: 2025-09-12 00:00:29.440 [INFO][4962] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" Namespace="calico-system" Pod="csi-node-driver-fv47h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fv47h-eth0"
Sep 12 00:00:29.464070 containerd[1512]: time="2025-09-12T00:00:29.464025789Z" level=info msg="connecting to shim 938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca" address="unix:///run/containerd/s/f7ce406a5233a6bf55b52f3707c55543d8e8b9bdfae832f7d524818186892721" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:00:29.485916 systemd[1]: Started cri-containerd-938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca.scope - libcontainer container 938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca.
Sep 12 00:00:29.495466 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:00:29.507493 containerd[1512]: time="2025-09-12T00:00:29.507460478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fv47h,Uid:0cbcc7da-746e-4cfb-9c94-96d5f1400fdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca\""
Sep 12 00:00:30.431931 systemd-networkd[1427]: cali7acc2581f3e: Gained IPv6LL
Sep 12 00:00:30.495927 systemd-networkd[1427]: cali241a9cb2a2f: Gained IPv6LL
Sep 12 00:00:30.559861 systemd-networkd[1427]: cali9322adb1e04: Gained IPv6LL
Sep 12 00:00:30.879883 systemd-networkd[1427]: cali2fea9e5ea5e: Gained IPv6LL
Sep 12 00:00:31.152574 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784).
Sep 12 00:00:31.213944 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:31.215552 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:31.221543 systemd-logind[1488]: New session 11 of user core.
Sep 12 00:00:31.231016 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 00:00:31.376841 sshd[5040]: Connection closed by 10.0.0.1 port 34784
Sep 12 00:00:31.377192 sshd-session[5038]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:31.391148 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:34784.service: Deactivated successfully.
Sep 12 00:00:31.394370 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 00:00:31.395793 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit.
Sep 12 00:00:31.398483 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794).
Sep 12 00:00:31.400234 systemd-logind[1488]: Removed session 11.
Sep 12 00:00:31.459769 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:31.461135 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:31.465848 systemd-logind[1488]: New session 12 of user core.
Sep 12 00:00:31.473908 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 00:00:31.656673 sshd[5057]: Connection closed by 10.0.0.1 port 34794
Sep 12 00:00:31.656981 sshd-session[5055]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:31.666064 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:34794.service: Deactivated successfully.
Sep 12 00:00:31.668595 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 00:00:31.672824 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit.
Sep 12 00:00:31.678494 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:34810.service - OpenSSH per-connection server daemon (10.0.0.1:34810).
Sep 12 00:00:31.679791 systemd-logind[1488]: Removed session 12.
Sep 12 00:00:31.735803 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 34810 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:31.737862 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:31.743808 systemd-logind[1488]: New session 13 of user core.
Sep 12 00:00:31.748935 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 00:00:31.897038 sshd[5071]: Connection closed by 10.0.0.1 port 34810
Sep 12 00:00:31.897547 sshd-session[5069]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:31.901091 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:34810.service: Deactivated successfully.
Sep 12 00:00:31.903329 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 00:00:31.904246 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit.
Sep 12 00:00:31.905822 systemd-logind[1488]: Removed session 13.
Sep 12 00:00:36.912632 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:34816.service - OpenSSH per-connection server daemon (10.0.0.1:34816).
Sep 12 00:00:36.970315 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:36.971567 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:36.976803 systemd-logind[1488]: New session 14 of user core.
Sep 12 00:00:36.983931 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 00:00:37.105645 sshd[5094]: Connection closed by 10.0.0.1 port 34816
Sep 12 00:00:37.104791 sshd-session[5092]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:37.107676 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:34816.service: Deactivated successfully.
Sep 12 00:00:37.109498 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 00:00:37.110824 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit.
Sep 12 00:00:37.111951 systemd-logind[1488]: Removed session 14.
Sep 12 00:00:39.804617 containerd[1512]: time="2025-09-12T00:00:39.804566302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:39.805097 containerd[1512]: time="2025-09-12T00:00:39.805008222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 12 00:00:39.805901 containerd[1512]: time="2025-09-12T00:00:39.805876942Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:39.807845 containerd[1512]: time="2025-09-12T00:00:39.807796542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:39.808610 containerd[1512]: time="2025-09-12T00:00:39.808498382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 14.191880891s"
Sep 12 00:00:39.808610 containerd[1512]: time="2025-09-12T00:00:39.808526022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 00:00:39.809783 containerd[1512]: time="2025-09-12T00:00:39.809739102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 00:00:39.813486 containerd[1512]: time="2025-09-12T00:00:39.813454822Z" level=info msg="CreateContainer within sandbox \"96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 00:00:39.821592 containerd[1512]: time="2025-09-12T00:00:39.820890983Z" level=info msg="Container 5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:00:39.828395 containerd[1512]: time="2025-09-12T00:00:39.828307024Z" level=info msg="CreateContainer within sandbox \"96244131f49008c690ce92a6c1ff095d8357f220268451b811dc24cca7341577\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5\""
Sep 12 00:00:39.829221 containerd[1512]: time="2025-09-12T00:00:39.829193944Z" level=info msg="StartContainer for \"5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5\""
Sep 12 00:00:39.831359 containerd[1512]: time="2025-09-12T00:00:39.831327704Z" level=info msg="connecting to shim 5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5" address="unix:///run/containerd/s/d963530038bf8026e436426d176be4710a4adc9579e5d9f97c8924069e0daa72" protocol=ttrpc version=3
Sep 12 00:00:39.851902 systemd[1]: Started cri-containerd-5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5.scope - libcontainer container 5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5.
Sep 12 00:00:39.994937 containerd[1512]: time="2025-09-12T00:00:39.994902042Z" level=info msg="StartContainer for \"5778f4e369eb089b5f24a4a67d3c5c36e732de1bdeb255de1f8f38885b9f99f5\" returns successfully"
Sep 12 00:00:40.602639 kubelet[2630]: I0912 00:00:40.602527 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-948cb9647-w47mh" podStartSLOduration=36.409151372 podStartE2EDuration="50.602511223s" podCreationTimestamp="2025-09-11 23:59:50 +0000 UTC" firstStartedPulling="2025-09-12 00:00:25.616281891 +0000 UTC m=+52.395682686" lastFinishedPulling="2025-09-12 00:00:39.809641702 +0000 UTC m=+66.589042537" observedRunningTime="2025-09-12 00:00:40.600624063 +0000 UTC m=+67.380024898" watchObservedRunningTime="2025-09-12 00:00:40.602511223 +0000 UTC m=+67.381912058"
Sep 12 00:00:42.116608 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:36076.service - OpenSSH per-connection server daemon (10.0.0.1:36076).
Sep 12 00:00:42.166507 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 36076 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:42.167998 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:42.172818 systemd-logind[1488]: New session 15 of user core.
Sep 12 00:00:42.178937 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 00:00:42.370052 sshd[5166]: Connection closed by 10.0.0.1 port 36076
Sep 12 00:00:42.370928 sshd-session[5164]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:42.374492 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:36076.service: Deactivated successfully.
Sep 12 00:00:42.376310 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 00:00:42.377141 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit.
Sep 12 00:00:42.378158 systemd-logind[1488]: Removed session 15.
Sep 12 00:00:44.292809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246895836.mount: Deactivated successfully.
Sep 12 00:00:44.308229 containerd[1512]: time="2025-09-12T00:00:44.308171892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:44.309062 containerd[1512]: time="2025-09-12T00:00:44.309030338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700"
Sep 12 00:00:44.311213 containerd[1512]: time="2025-09-12T00:00:44.311158711Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:44.313392 containerd[1512]: time="2025-09-12T00:00:44.313084404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:44.313872 containerd[1512]: time="2025-09-12T00:00:44.313838328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 4.504050586s"
Sep 12 00:00:44.313923 containerd[1512]: time="2025-09-12T00:00:44.313873089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\""
Sep 12 00:00:44.314997 containerd[1512]: time="2025-09-12T00:00:44.314974336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 00:00:44.318049 containerd[1512]: time="2025-09-12T00:00:44.318018075Z" level=info msg="CreateContainer within sandbox \"ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 00:00:44.333144 containerd[1512]: time="2025-09-12T00:00:44.333069451Z" level=info msg="Container d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:00:44.334681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532223350.mount: Deactivated successfully.
Sep 12 00:00:44.340056 containerd[1512]: time="2025-09-12T00:00:44.340026376Z" level=info msg="CreateContainer within sandbox \"ac91041afaed0a58944d6f012614c153653a09af750cbc1dc8ad61679be93f85\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0\""
Sep 12 00:00:44.340828 containerd[1512]: time="2025-09-12T00:00:44.340728940Z" level=info msg="StartContainer for \"d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0\""
Sep 12 00:00:44.342445 containerd[1512]: time="2025-09-12T00:00:44.341924428Z" level=info msg="connecting to shim d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0" address="unix:///run/containerd/s/b0cb58d9b8a2064fb27dd39d1c2b2ece9fba662227cbae50b0b710d8624da225" protocol=ttrpc version=3
Sep 12 00:00:44.365918 systemd[1]: Started cri-containerd-d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0.scope - libcontainer container d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0.
Sep 12 00:00:44.400986 containerd[1512]: time="2025-09-12T00:00:44.400949206Z" level=info msg="StartContainer for \"d65af12d2cb8905f2e247e5f6b45594cd896d5f9e820ce7b30704a2bbd7900f0\" returns successfully"
Sep 12 00:00:44.607356 kubelet[2630]: I0912 00:00:44.606663 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c4bf47dcc-j8lvh" podStartSLOduration=1.581001248 podStartE2EDuration="27.606649402s" podCreationTimestamp="2025-09-12 00:00:17 +0000 UTC" firstStartedPulling="2025-09-12 00:00:18.289198501 +0000 UTC m=+45.068599336" lastFinishedPulling="2025-09-12 00:00:44.314846655 +0000 UTC m=+71.094247490" observedRunningTime="2025-09-12 00:00:44.606336119 +0000 UTC m=+71.385736954" watchObservedRunningTime="2025-09-12 00:00:44.606649402 +0000 UTC m=+71.386050237"
Sep 12 00:00:47.327797 containerd[1512]: time="2025-09-12T00:00:47.327505753Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:47.328717 containerd[1512]: time="2025-09-12T00:00:47.328035876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 00:00:47.330077 containerd[1512]: time="2025-09-12T00:00:47.330016768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.015012752s"
Sep 12 00:00:47.330077 containerd[1512]: time="2025-09-12T00:00:47.330077368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 00:00:47.330825 containerd[1512]: time="2025-09-12T00:00:47.330804412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 12 00:00:47.334257 containerd[1512]: time="2025-09-12T00:00:47.334083832Z" level=info msg="CreateContainer within sandbox \"4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 00:00:47.341631 containerd[1512]: time="2025-09-12T00:00:47.340907752Z" level=info msg="Container 4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:00:47.348976 containerd[1512]: time="2025-09-12T00:00:47.348942159Z" level=info msg="CreateContainer within sandbox \"4b16bc3ed58131b16569c73e3f7293e01753a506a06721bdb565e7a272f4c77c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0\""
Sep 12 00:00:47.349597 containerd[1512]: time="2025-09-12T00:00:47.349549803Z" level=info msg="StartContainer for \"4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0\""
Sep 12 00:00:47.355980 containerd[1512]: time="2025-09-12T00:00:47.355599958Z" level=info msg="connecting to shim 4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0" address="unix:///run/containerd/s/ba2669f38282f96162a7aba4c50682ca632e678e9d3ef21cc68e679b77009da9" protocol=ttrpc version=3
Sep 12 00:00:47.377883 systemd[1]: Started cri-containerd-4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0.scope - libcontainer container 4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0.
Sep 12 00:00:47.383515 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:36092.service - OpenSSH per-connection server daemon (10.0.0.1:36092).
Sep 12 00:00:47.419091 containerd[1512]: time="2025-09-12T00:00:47.419041771Z" level=info msg="StartContainer for \"4b3837269e631026b993817f6be376cfa74af976cd5c826c13a8776c5a687dd0\" returns successfully"
Sep 12 00:00:47.451669 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 36092 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:47.454926 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:47.459622 systemd-logind[1488]: New session 16 of user core.
Sep 12 00:00:47.465888 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 00:00:47.594446 containerd[1512]: time="2025-09-12T00:00:47.594338923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a8d1139e9e9887639c3850eb7e45cd15307e2a046b7bd35a5b7fbbafc9668c1\" id:\"a6c99c22737f17b1b6279c087ec242b3a556ceed9caf4715301acf13317ca754\" pid:5281 exited_at:{seconds:1757635247 nanos:594050841}"
Sep 12 00:00:47.633391 kubelet[2630]: I0912 00:00:47.632802 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-948cb9647-j6lg2" podStartSLOduration=38.897660782 podStartE2EDuration="57.632784789s" podCreationTimestamp="2025-09-11 23:59:50 +0000 UTC" firstStartedPulling="2025-09-12 00:00:28.595575045 +0000 UTC m=+55.374975880" lastFinishedPulling="2025-09-12 00:00:47.330699052 +0000 UTC m=+74.110099887" observedRunningTime="2025-09-12 00:00:47.631050338 +0000 UTC m=+74.410451173" watchObservedRunningTime="2025-09-12 00:00:47.632784789 +0000 UTC m=+74.412185584"
Sep 12 00:00:47.667877 sshd[5261]: Connection closed by 10.0.0.1 port 36092
Sep 12 00:00:47.667148 sshd-session[5243]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:47.672182 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit.
Sep 12 00:00:47.673325 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:36092.service: Deactivated successfully.
Sep 12 00:00:47.676022 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 00:00:47.680467 systemd-logind[1488]: Removed session 16.
Sep 12 00:00:48.612775 kubelet[2630]: I0912 00:00:48.612399 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 00:00:49.505298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521832883.mount: Deactivated successfully.
Sep 12 00:00:51.895773 containerd[1512]: time="2025-09-12T00:00:51.895709448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:51.896894 containerd[1512]: time="2025-09-12T00:00:51.896869735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 12 00:00:51.897604 containerd[1512]: time="2025-09-12T00:00:51.897580978Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:51.899829 containerd[1512]: time="2025-09-12T00:00:51.899804590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:51.900708 containerd[1512]: time="2025-09-12T00:00:51.900680555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.569838903s"
Sep 12 00:00:51.900708 containerd[1512]: time="2025-09-12T00:00:51.900711275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 12 00:00:51.901765 containerd[1512]: time="2025-09-12T00:00:51.901662520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 00:00:51.904923 containerd[1512]: time="2025-09-12T00:00:51.904189013Z" level=info msg="CreateContainer within sandbox \"eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 00:00:51.912220 containerd[1512]: time="2025-09-12T00:00:51.912188415Z" level=info msg="Container aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:00:51.918589 containerd[1512]: time="2025-09-12T00:00:51.918556169Z" level=info msg="CreateContainer within sandbox \"eb3a4c7aa2ea0e1140123f4dbb26fa4fe1666952acd9d03fd70d115b846b4afe\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945\""
Sep 12 00:00:51.919161 containerd[1512]: time="2025-09-12T00:00:51.919136012Z" level=info msg="StartContainer for \"aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945\""
Sep 12 00:00:51.921593 containerd[1512]: time="2025-09-12T00:00:51.921568385Z" level=info msg="connecting to shim aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945" address="unix:///run/containerd/s/1c8a6b998ae596aef00380cb55039ab29f3abbed8edb5e02ffa68033e0c5798c" protocol=ttrpc version=3
Sep 12 00:00:51.940908 systemd[1]: Started cri-containerd-aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945.scope - libcontainer container aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945.
Sep 12 00:00:51.992779 containerd[1512]: time="2025-09-12T00:00:51.992577719Z" level=info msg="StartContainer for \"aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945\" returns successfully"
Sep 12 00:00:52.637947 kubelet[2630]: I0912 00:00:52.637797 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-g56ln" podStartSLOduration=36.420569533 podStartE2EDuration="59.637782507s" podCreationTimestamp="2025-09-11 23:59:53 +0000 UTC" firstStartedPulling="2025-09-12 00:00:28.684300545 +0000 UTC m=+55.463701340" lastFinishedPulling="2025-09-12 00:00:51.901513519 +0000 UTC m=+78.680914314" observedRunningTime="2025-09-12 00:00:52.637626866 +0000 UTC m=+79.417027701" watchObservedRunningTime="2025-09-12 00:00:52.637782507 +0000 UTC m=+79.417183342"
Sep 12 00:00:52.681838 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:50738.service - OpenSSH per-connection server daemon (10.0.0.1:50738).
Sep 12 00:00:52.741319 containerd[1512]: time="2025-09-12T00:00:52.741272438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945\" id:\"cca5c55cab5b2ea9d790de9ffac5cfdaf76367ec90a29e13d2b1c897fac2cd93\" pid:5365 exit_status:1 exited_at:{seconds:1757635252 nanos:740626994}"
Sep 12 00:00:52.758361 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 50738 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:52.761446 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:52.769088 systemd-logind[1488]: New session 17 of user core.
Sep 12 00:00:52.775918 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 00:00:52.983579 sshd[5380]: Connection closed by 10.0.0.1 port 50738
Sep 12 00:00:52.984857 sshd-session[5377]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:52.994388 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:50738.service: Deactivated successfully.
Sep 12 00:00:52.999155 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 00:00:53.002968 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit.
Sep 12 00:00:53.009194 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:50740.service - OpenSSH per-connection server daemon (10.0.0.1:50740).
Sep 12 00:00:53.010980 systemd-logind[1488]: Removed session 17.
Sep 12 00:00:53.075956 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 50740 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:53.076742 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:53.081560 systemd-logind[1488]: New session 18 of user core.
Sep 12 00:00:53.092921 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 00:00:53.302172 sshd[5396]: Connection closed by 10.0.0.1 port 50740
Sep 12 00:00:53.302824 sshd-session[5394]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:53.315979 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:50740.service: Deactivated successfully.
Sep 12 00:00:53.319207 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 00:00:53.320090 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit.
Sep 12 00:00:53.323686 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:50750.service - OpenSSH per-connection server daemon (10.0.0.1:50750).
Sep 12 00:00:53.324770 systemd-logind[1488]: Removed session 18.
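[Editor's sketch: the TaskExit events above report exited_at as raw protobuf Timestamp fields, i.e. Unix seconds plus nanoseconds. Converting the value from the goldmane task exit above back to wall-clock time lands at 00:00:52.740626994 UTC, consistent with the journal timestamp on the same entry.]

```go
// Convert a protobuf-style {seconds, nanos} exited_at value, as logged
// in the TaskExit events above, back to a wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1757635252, 740626994).UTC() // values from the log entry
	fmt.Println(exitedAt.Format(time.RFC3339Nano))     // 2025-09-12T00:00:52.740626994Z
}
```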
Sep 12 00:00:53.384618 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 50750 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:53.385965 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:53.389795 systemd-logind[1488]: New session 19 of user core.
Sep 12 00:00:53.398903 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 00:00:53.693686 containerd[1512]: time="2025-09-12T00:00:53.693646425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aed7bf833a52644662c8c49095f90966bc8b849101dbf4a5d94e3db275908945\" id:\"f82401f56764d83eea46a229f24c44559e80a728d38f548a1233919528c2d75e\" pid:5428 exit_status:1 exited_at:{seconds:1757635253 nanos:693350103}"
Sep 12 00:00:54.018137 sshd[5410]: Connection closed by 10.0.0.1 port 50750
Sep 12 00:00:54.018425 sshd-session[5408]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:54.028066 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:50750.service: Deactivated successfully.
Sep 12 00:00:54.031381 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 00:00:54.032578 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit.
Sep 12 00:00:54.038016 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:50758.service - OpenSSH per-connection server daemon (10.0.0.1:50758).
Sep 12 00:00:54.039788 systemd-logind[1488]: Removed session 19.
Sep 12 00:00:54.097506 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 50758 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:54.100337 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:54.108106 systemd-logind[1488]: New session 20 of user core.
Sep 12 00:00:54.118937 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 00:00:54.321678 kubelet[2630]: E0912 00:00:54.321304 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:00:54.494113 sshd[5454]: Connection closed by 10.0.0.1 port 50758
Sep 12 00:00:54.495959 sshd-session[5450]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:54.508702 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:50758.service: Deactivated successfully.
Sep 12 00:00:54.511313 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 00:00:54.514447 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit.
Sep 12 00:00:54.518184 systemd-logind[1488]: Removed session 20.
Sep 12 00:00:54.521522 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:50774.service - OpenSSH per-connection server daemon (10.0.0.1:50774).
Sep 12 00:00:54.578239 sshd[5465]: Accepted publickey for core from 10.0.0.1 port 50774 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:54.579545 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:54.586080 systemd-logind[1488]: New session 21 of user core.
Sep 12 00:00:54.597028 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 00:00:54.775112 sshd[5467]: Connection closed by 10.0.0.1 port 50774
Sep 12 00:00:54.775698 sshd-session[5465]: pam_unix(sshd:session): session closed for user core
Sep 12 00:00:54.780402 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:50774.service: Deactivated successfully.
Sep 12 00:00:54.782538 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 00:00:54.784613 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit.
Sep 12 00:00:54.786432 systemd-logind[1488]: Removed session 21.
Sep 12 00:00:54.969608 kubelet[2630]: I0912 00:00:54.969515 2630 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 00:00:56.159234 containerd[1512]: time="2025-09-12T00:00:56.159152748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:56.160062 containerd[1512]: time="2025-09-12T00:00:56.159992472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 12 00:00:56.161257 containerd[1512]: time="2025-09-12T00:00:56.160992437Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:56.163141 containerd[1512]: time="2025-09-12T00:00:56.163093246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:56.163811 containerd[1512]: time="2025-09-12T00:00:56.163788929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.262093609s"
Sep 12 00:00:56.163872 containerd[1512]: time="2025-09-12T00:00:56.163815410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 12 00:00:56.165092 containerd[1512]: time="2025-09-12T00:00:56.165050935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 00:00:56.173178 containerd[1512]: time="2025-09-12T00:00:56.172947532Z" level=info msg="CreateContainer within sandbox \"06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 00:00:56.183935 containerd[1512]: time="2025-09-12T00:00:56.183158699Z" level=info msg="Container a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:00:56.191149 containerd[1512]: time="2025-09-12T00:00:56.191106855Z" level=info msg="CreateContainer within sandbox \"06ffe8ba685517e540cbc5ddb9aaec4ca933368f9c5328a1727be6891e838b5e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2\""
Sep 12 00:00:56.191806 containerd[1512]: time="2025-09-12T00:00:56.191783538Z" level=info msg="StartContainer for \"a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2\""
Sep 12 00:00:56.192876 containerd[1512]: time="2025-09-12T00:00:56.192853503Z" level=info msg="connecting to shim a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2" address="unix:///run/containerd/s/72e87ebaa10240736c6a7c8f7a9b8e7e918891703cefa3195a1b40d8123d81a5" protocol=ttrpc version=3
Sep 12 00:00:56.221951 systemd[1]: Started cri-containerd-a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2.scope - libcontainer container a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2.
Sep 12 00:00:56.272924 containerd[1512]: time="2025-09-12T00:00:56.272859471Z" level=info msg="StartContainer for \"a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2\" returns successfully"
Sep 12 00:00:56.648207 kubelet[2630]: I0912 00:00:56.647160 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-669f89fcc5-8lg6p" podStartSLOduration=35.249157064 podStartE2EDuration="1m2.647143994s" podCreationTimestamp="2025-09-11 23:59:54 +0000 UTC" firstStartedPulling="2025-09-12 00:00:28.766548403 +0000 UTC m=+55.545949238" lastFinishedPulling="2025-09-12 00:00:56.164535373 +0000 UTC m=+82.943936168" observedRunningTime="2025-09-12 00:00:56.64635763 +0000 UTC m=+83.425758425" watchObservedRunningTime="2025-09-12 00:00:56.647143994 +0000 UTC m=+83.426544829"
Sep 12 00:00:56.679701 containerd[1512]: time="2025-09-12T00:00:56.679658143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4432dbef4e3149c52e51219b763293c114af6f583d1c5cd810bca5155b119d2\" id:\"dcb0e1b87b26207f6f9fced17e21c671f9e220e2edbb7dc526cefdad75e1d391\" pid:5546 exited_at:{seconds:1757635256 nanos:677415293}"
Sep 12 00:00:57.225610 containerd[1512]: time="2025-09-12T00:00:57.225569229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:57.226117 containerd[1512]: time="2025-09-12T00:00:57.226078231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 12 00:00:57.235019 containerd[1512]: time="2025-09-12T00:00:57.234989951Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:57.237500 containerd[1512]: time="2025-09-12T00:00:57.237456802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:00:57.238347 containerd[1512]: time="2025-09-12T00:00:57.237995404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.072797028s"
Sep 12 00:00:57.238347 containerd[1512]: time="2025-09-12T00:00:57.238024444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 12 00:00:57.242798 containerd[1512]: time="2025-09-12T00:00:57.242729945Z" level=info msg="CreateContainer within sandbox \"938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 00:00:57.250633 containerd[1512]: time="2025-09-12T00:00:57.250037618Z" level=info msg="Container ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:00:57.267203 containerd[1512]: time="2025-09-12T00:00:57.267167175Z" level=info msg="CreateContainer within sandbox \"938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a\""
Sep 12 00:00:57.267765 containerd[1512]: time="2025-09-12T00:00:57.267648377Z" level=info msg="StartContainer for \"ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a\""
Sep 12 00:00:57.270214 containerd[1512]: time="2025-09-12T00:00:57.270180348Z" level=info msg="connecting to shim ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a" address="unix:///run/containerd/s/f7ce406a5233a6bf55b52f3707c55543d8e8b9bdfae832f7d524818186892721" protocol=ttrpc version=3
Sep 12 00:00:57.298964 systemd[1]: Started cri-containerd-ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a.scope - libcontainer container ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a.
Sep 12 00:00:57.382517 containerd[1512]: time="2025-09-12T00:00:57.382479732Z" level=info msg="StartContainer for \"ff9134cea97a42ba75dc00499a8329dfccd571c025851a5a669cc20b2c4f790a\" returns successfully"
Sep 12 00:00:57.383521 containerd[1512]: time="2025-09-12T00:00:57.383497896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 00:00:59.793064 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:50784.service - OpenSSH per-connection server daemon (10.0.0.1:50784).
Sep 12 00:00:59.860406 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 50784 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:00:59.863390 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:00:59.867813 systemd-logind[1488]: New session 22 of user core.
Sep 12 00:00:59.878954 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 00:01:00.048882 sshd[5603]: Connection closed by 10.0.0.1 port 50784
Sep 12 00:01:00.049785 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
Sep 12 00:01:00.053885 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:50784.service: Deactivated successfully.
Sep 12 00:01:00.057020 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 00:01:00.060004 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit.
Sep 12 00:01:00.061135 systemd-logind[1488]: Removed session 22.
Sep 12 00:01:03.383856 containerd[1512]: time="2025-09-12T00:01:03.383804768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:01:03.384781 containerd[1512]: time="2025-09-12T00:01:03.384646131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 00:01:03.387327 containerd[1512]: time="2025-09-12T00:01:03.385919936Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:01:03.389822 containerd[1512]: time="2025-09-12T00:01:03.389793511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:01:03.390506 containerd[1512]: time="2025-09-12T00:01:03.390476474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 6.006949658s"
Sep 12 00:01:03.390688 containerd[1512]: time="2025-09-12T00:01:03.390614474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 00:01:03.395175 containerd[1512]: time="2025-09-12T00:01:03.395143492Z" level=info msg="CreateContainer within sandbox \"938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 00:01:03.406053 containerd[1512]: time="2025-09-12T00:01:03.405903333Z" level=info msg="Container 7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:01:03.413517 containerd[1512]: time="2025-09-12T00:01:03.413475522Z" level=info msg="CreateContainer within sandbox \"938b8e6e1402201184892164ed305d96bd947431ccc398931631b8b1899e52ca\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720\""
Sep 12 00:01:03.414223 containerd[1512]: time="2025-09-12T00:01:03.414189525Z" level=info msg="StartContainer for \"7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720\""
Sep 12 00:01:03.415509 containerd[1512]: time="2025-09-12T00:01:03.415481370Z" level=info msg="connecting to shim 7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720" address="unix:///run/containerd/s/f7ce406a5233a6bf55b52f3707c55543d8e8b9bdfae832f7d524818186892721" protocol=ttrpc version=3
Sep 12 00:01:03.456928 systemd[1]: Started cri-containerd-7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720.scope - libcontainer container 7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720.
Sep 12 00:01:03.503576 containerd[1512]: time="2025-09-12T00:01:03.503536427Z" level=info msg="StartContainer for \"7870367b72d385138674763e25e6f8c421a00e4175da39cfc98560c8eab4f720\" returns successfully"
Sep 12 00:01:03.682020 kubelet[2630]: I0912 00:01:03.681874 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fv47h" podStartSLOduration=35.79895411 podStartE2EDuration="1m9.68185587s" podCreationTimestamp="2025-09-11 23:59:54 +0000 UTC" firstStartedPulling="2025-09-12 00:00:29.508761318 +0000 UTC m=+56.288162153" lastFinishedPulling="2025-09-12 00:01:03.391663078 +0000 UTC m=+90.171063913" observedRunningTime="2025-09-12 00:01:03.679609941 +0000 UTC m=+90.459010776" watchObservedRunningTime="2025-09-12 00:01:03.68185587 +0000 UTC m=+90.461256705"
Sep 12 00:01:04.402370 kubelet[2630]: I0912 00:01:04.402324 2630 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 00:01:04.409228 kubelet[2630]: I0912 00:01:04.409204 2630 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 00:01:05.063803 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:50338.service - OpenSSH per-connection server daemon (10.0.0.1:50338).
Sep 12 00:01:05.142767 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:01:05.144010 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:01:05.154850 systemd-logind[1488]: New session 23 of user core.
Sep 12 00:01:05.165394 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 00:01:05.355848 sshd[5657]: Connection closed by 10.0.0.1 port 50338
Sep 12 00:01:05.356499 sshd-session[5654]: pam_unix(sshd:session): session closed for user core
Sep 12 00:01:05.360900 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:50338.service: Deactivated successfully.
Sep 12 00:01:05.362743 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 00:01:05.366063 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit.
Sep 12 00:01:05.367045 systemd-logind[1488]: Removed session 23.
Sep 12 00:01:08.321968 kubelet[2630]: E0912 00:01:08.321919 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:01:09.321996 kubelet[2630]: E0912 00:01:09.321928 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:01:10.374033 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:41050.service - OpenSSH per-connection server daemon (10.0.0.1:41050).
Sep 12 00:01:10.431854 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 41050 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 12 00:01:10.433559 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:01:10.439528 systemd-logind[1488]: New session 24 of user core.
Sep 12 00:01:10.449982 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 00:01:10.645955 sshd[5674]: Connection closed by 10.0.0.1 port 41050
Sep 12 00:01:10.645559 sshd-session[5672]: pam_unix(sshd:session): session closed for user core
Sep 12 00:01:10.649472 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:41050.service: Deactivated successfully.
Sep 12 00:01:10.651140 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 00:01:10.652612 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit.
Sep 12 00:01:10.653887 systemd-logind[1488]: Removed session 24.