Sep 9 04:54:08.760201 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 04:54:08.760224 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025
Sep 9 04:54:08.760233 kernel: KASLR enabled
Sep 9 04:54:08.760239 kernel: efi: EFI v2.7 by EDK II
Sep 9 04:54:08.760245 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 04:54:08.760250 kernel: random: crng init done
Sep 9 04:54:08.760257 kernel: secureboot: Secure boot disabled
Sep 9 04:54:08.760263 kernel: ACPI: Early table checksum verification disabled
Sep 9 04:54:08.760268 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 04:54:08.760291 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 04:54:08.760297 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760303 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760309 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760315 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760322 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760330 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760336 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760342 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760348 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 04:54:08.760354 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 04:54:08.760361 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 04:54:08.760367 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 04:54:08.760373 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 04:54:08.760378 kernel: Zone ranges:
Sep 9 04:54:08.760385 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 04:54:08.760392 kernel: DMA32 empty
Sep 9 04:54:08.760398 kernel: Normal empty
Sep 9 04:54:08.760404 kernel: Device empty
Sep 9 04:54:08.760410 kernel: Movable zone start for each node
Sep 9 04:54:08.760416 kernel: Early memory node ranges
Sep 9 04:54:08.760422 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 04:54:08.760429 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 04:54:08.760435 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 04:54:08.760441 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 04:54:08.760447 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 04:54:08.760453 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 04:54:08.760459 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 04:54:08.760467 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 04:54:08.760473 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 04:54:08.760479 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 04:54:08.760488 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 04:54:08.760494 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 04:54:08.760501 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 04:54:08.760508 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 04:54:08.760514 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 04:54:08.760521 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 04:54:08.760527 kernel: psci: probing for conduit method from ACPI.
Sep 9 04:54:08.760533 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 04:54:08.760539 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 04:54:08.760546 kernel: psci: Trusted OS migration not required
Sep 9 04:54:08.760552 kernel: psci: SMC Calling Convention v1.1
Sep 9 04:54:08.760558 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 04:54:08.760564 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 04:54:08.760572 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 04:54:08.760578 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 04:54:08.760594 kernel: Detected PIPT I-cache on CPU0
Sep 9 04:54:08.760601 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 04:54:08.760607 kernel: CPU features: detected: Spectre-v4
Sep 9 04:54:08.760613 kernel: CPU features: detected: Spectre-BHB
Sep 9 04:54:08.760620 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 04:54:08.760626 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 04:54:08.760632 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 04:54:08.760639 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 04:54:08.760645 kernel: alternatives: applying boot alternatives
Sep 9 04:54:08.760652 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 04:54:08.760661 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 04:54:08.760668 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 04:54:08.760674 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 04:54:08.760680 kernel: Fallback order for Node 0: 0
Sep 9 04:54:08.760695 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 04:54:08.760701 kernel: Policy zone: DMA
Sep 9 04:54:08.760708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 04:54:08.760714 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 04:54:08.760720 kernel: software IO TLB: area num 4.
Sep 9 04:54:08.760727 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 04:54:08.760733 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 04:54:08.760742 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 04:54:08.760748 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 04:54:08.760755 kernel: rcu: RCU event tracing is enabled.
Sep 9 04:54:08.760762 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 04:54:08.760769 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 04:54:08.760775 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 04:54:08.760782 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 04:54:08.760789 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 04:54:08.760795 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 04:54:08.760802 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 04:54:08.760808 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 04:54:08.760815 kernel: GICv3: 256 SPIs implemented
Sep 9 04:54:08.760822 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 04:54:08.760828 kernel: Root IRQ handler: gic_handle_irq
Sep 9 04:54:08.760835 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 04:54:08.760841 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 04:54:08.760847 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 04:54:08.760853 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 04:54:08.760860 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 04:54:08.760866 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 04:54:08.760873 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 04:54:08.760879 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 04:54:08.760886 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 04:54:08.760893 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 04:54:08.760900 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 04:54:08.760907 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 04:54:08.760913 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 04:54:08.760920 kernel: arm-pv: using stolen time PV
Sep 9 04:54:08.760927 kernel: Console: colour dummy device 80x25
Sep 9 04:54:08.760933 kernel: ACPI: Core revision 20240827
Sep 9 04:54:08.760940 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 04:54:08.760947 kernel: pid_max: default: 32768 minimum: 301
Sep 9 04:54:08.760953 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 04:54:08.760961 kernel: landlock: Up and running.
Sep 9 04:54:08.760968 kernel: SELinux: Initializing.
Sep 9 04:54:08.760974 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 04:54:08.760981 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 04:54:08.760988 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 04:54:08.760995 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 04:54:08.761001 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 04:54:08.761008 kernel: Remapping and enabling EFI services.
Sep 9 04:54:08.761015 kernel: smp: Bringing up secondary CPUs ...
Sep 9 04:54:08.761027 kernel: Detected PIPT I-cache on CPU1
Sep 9 04:54:08.761034 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 04:54:08.761041 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 04:54:08.761049 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 04:54:08.761056 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 04:54:08.761063 kernel: Detected PIPT I-cache on CPU2
Sep 9 04:54:08.761071 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 04:54:08.761078 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 04:54:08.761086 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 04:54:08.761093 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 04:54:08.761100 kernel: Detected PIPT I-cache on CPU3
Sep 9 04:54:08.761106 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 04:54:08.761114 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 04:54:08.761120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 04:54:08.761127 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 04:54:08.761134 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 04:54:08.761141 kernel: SMP: Total of 4 processors activated.
Sep 9 04:54:08.761149 kernel: CPU: All CPU(s) started at EL1
Sep 9 04:54:08.761156 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 04:54:08.761163 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 04:54:08.761170 kernel: CPU features: detected: Common not Private translations
Sep 9 04:54:08.761177 kernel: CPU features: detected: CRC32 instructions
Sep 9 04:54:08.761184 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 04:54:08.761190 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 04:54:08.761198 kernel: CPU features: detected: LSE atomic instructions
Sep 9 04:54:08.761205 kernel: CPU features: detected: Privileged Access Never
Sep 9 04:54:08.761213 kernel: CPU features: detected: RAS Extension Support
Sep 9 04:54:08.761220 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 04:54:08.761228 kernel: alternatives: applying system-wide alternatives
Sep 9 04:54:08.761234 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 04:54:08.761242 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 9 04:54:08.761249 kernel: devtmpfs: initialized
Sep 9 04:54:08.761256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 04:54:08.761263 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 04:54:08.761271 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 04:54:08.761279 kernel: 0 pages in range for non-PLT usage
Sep 9 04:54:08.761286 kernel: 508560 pages in range for PLT usage
Sep 9 04:54:08.761293 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 04:54:08.761300 kernel: SMBIOS 3.0.0 present.
Sep 9 04:54:08.761307 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 04:54:08.761314 kernel: DMI: Memory slots populated: 1/1
Sep 9 04:54:08.761321 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 04:54:08.761328 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 04:54:08.761336 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 04:54:08.761344 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 04:54:08.761351 kernel: audit: initializing netlink subsys (disabled)
Sep 9 04:54:08.761359 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 9 04:54:08.761366 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 04:54:08.761373 kernel: cpuidle: using governor menu
Sep 9 04:54:08.761380 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 04:54:08.761387 kernel: ASID allocator initialised with 32768 entries
Sep 9 04:54:08.761394 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 04:54:08.761401 kernel: Serial: AMBA PL011 UART driver
Sep 9 04:54:08.761409 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 04:54:08.761416 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 04:54:08.761424 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 04:54:08.761431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 04:54:08.761438 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 04:54:08.761445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 04:54:08.761452 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 04:54:08.761462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 04:54:08.761476 kernel: ACPI: Added _OSI(Module Device)
Sep 9 04:54:08.761485 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 04:54:08.761492 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 04:54:08.761499 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 04:54:08.761505 kernel: ACPI: Interpreter enabled
Sep 9 04:54:08.761513 kernel: ACPI: Using GIC for interrupt routing
Sep 9 04:54:08.761520 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 04:54:08.761527 kernel: ACPI: CPU0 has been hot-added
Sep 9 04:54:08.761534 kernel: ACPI: CPU1 has been hot-added
Sep 9 04:54:08.761541 kernel: ACPI: CPU2 has been hot-added
Sep 9 04:54:08.761547 kernel: ACPI: CPU3 has been hot-added
Sep 9 04:54:08.761555 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 04:54:08.761562 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 04:54:08.761571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 04:54:08.761763 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 04:54:08.761834 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 04:54:08.761893 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 04:54:08.761950 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 04:54:08.762008 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 04:54:08.762017 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 04:54:08.762024 kernel: PCI host bridge to bus 0000:00
Sep 9 04:54:08.762086 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 04:54:08.762138 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 04:54:08.762190 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 04:54:08.762240 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 04:54:08.762326 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 04:54:08.762398 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 04:54:08.762459 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 04:54:08.762516 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 04:54:08.762572 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 04:54:08.762641 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 04:54:08.762722 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 04:54:08.762787 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 04:54:08.762840 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 04:54:08.762907 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 04:54:08.762958 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 04:54:08.762967 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 04:54:08.762975 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 04:54:08.762982 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 04:54:08.762991 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 04:54:08.762998 kernel: iommu: Default domain type: Translated
Sep 9 04:54:08.763005 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 04:54:08.763012 kernel: efivars: Registered efivars operations
Sep 9 04:54:08.763018 kernel: vgaarb: loaded
Sep 9 04:54:08.763025 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 04:54:08.763032 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 04:54:08.763039 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 04:54:08.763046 kernel: pnp: PnP ACPI init
Sep 9 04:54:08.763117 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 04:54:08.763127 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 04:54:08.763135 kernel: NET: Registered PF_INET protocol family
Sep 9 04:54:08.763142 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 04:54:08.763149 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 04:54:08.763156 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 04:54:08.763163 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 04:54:08.763170 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 04:54:08.763179 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 04:54:08.763186 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 04:54:08.763193 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 04:54:08.763201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 04:54:08.763208 kernel: PCI: CLS 0 bytes, default 64
Sep 9 04:54:08.763215 kernel: kvm [1]: HYP mode not available
Sep 9 04:54:08.763222 kernel: Initialise system trusted keyrings
Sep 9 04:54:08.763229 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 04:54:08.763236 kernel: Key type asymmetric registered
Sep 9 04:54:08.763244 kernel: Asymmetric key parser 'x509' registered
Sep 9 04:54:08.763252 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 04:54:08.763259 kernel: io scheduler mq-deadline registered
Sep 9 04:54:08.763266 kernel: io scheduler kyber registered
Sep 9 04:54:08.763273 kernel: io scheduler bfq registered
Sep 9 04:54:08.763280 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 04:54:08.763287 kernel: ACPI: button: Power Button [PWRB]
Sep 9 04:54:08.763294 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 04:54:08.763354 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 04:54:08.763365 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 04:54:08.763372 kernel: thunder_xcv, ver 1.0
Sep 9 04:54:08.763379 kernel: thunder_bgx, ver 1.0
Sep 9 04:54:08.763386 kernel: nicpf, ver 1.0
Sep 9 04:54:08.763393 kernel: nicvf, ver 1.0
Sep 9 04:54:08.763473 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 04:54:08.763529 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T04:54:08 UTC (1757393648)
Sep 9 04:54:08.763538 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 04:54:08.763549 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 04:54:08.763556 kernel: watchdog: NMI not fully supported
Sep 9 04:54:08.763562 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 04:54:08.763569 kernel: NET: Registered PF_INET6 protocol family
Sep 9 04:54:08.763576 kernel: Segment Routing with IPv6
Sep 9 04:54:08.763589 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 04:54:08.763597 kernel: NET: Registered PF_PACKET protocol family
Sep 9 04:54:08.763604 kernel: Key type dns_resolver registered
Sep 9 04:54:08.763611 kernel: registered taskstats version 1
Sep 9 04:54:08.763617 kernel: Loading compiled-in X.509 certificates
Sep 9 04:54:08.763626 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5'
Sep 9 04:54:08.763633 kernel: Demotion targets for Node 0: null
Sep 9 04:54:08.763639 kernel: Key type .fscrypt registered
Sep 9 04:54:08.763647 kernel: Key type fscrypt-provisioning registered
Sep 9 04:54:08.763654 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 04:54:08.763661 kernel: ima: Allocated hash algorithm: sha1
Sep 9 04:54:08.763668 kernel: ima: No architecture policies found
Sep 9 04:54:08.763675 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 04:54:08.763683 kernel: clk: Disabling unused clocks
Sep 9 04:54:08.763698 kernel: PM: genpd: Disabling unused power domains
Sep 9 04:54:08.763705 kernel: Warning: unable to open an initial console.
Sep 9 04:54:08.763712 kernel: Freeing unused kernel memory: 38976K
Sep 9 04:54:08.763719 kernel: Run /init as init process
Sep 9 04:54:08.763726 kernel: with arguments:
Sep 9 04:54:08.763733 kernel: /init
Sep 9 04:54:08.763739 kernel: with environment:
Sep 9 04:54:08.763746 kernel: HOME=/
Sep 9 04:54:08.763754 kernel: TERM=linux
Sep 9 04:54:08.763761 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 04:54:08.763769 systemd[1]: Successfully made /usr/ read-only.
Sep 9 04:54:08.763779 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 04:54:08.763787 systemd[1]: Detected virtualization kvm.
Sep 9 04:54:08.763795 systemd[1]: Detected architecture arm64.
Sep 9 04:54:08.763802 systemd[1]: Running in initrd.
Sep 9 04:54:08.763809 systemd[1]: No hostname configured, using default hostname.
Sep 9 04:54:08.763818 systemd[1]: Hostname set to .
Sep 9 04:54:08.763826 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 04:54:08.763833 systemd[1]: Queued start job for default target initrd.target.
Sep 9 04:54:08.763841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:54:08.763849 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:54:08.763857 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 04:54:08.763864 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 04:54:08.763872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 04:54:08.763882 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 04:54:08.763890 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 04:54:08.763898 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 04:54:08.763905 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:54:08.763913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:54:08.763920 systemd[1]: Reached target paths.target - Path Units.
Sep 9 04:54:08.763929 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 04:54:08.763936 systemd[1]: Reached target swap.target - Swaps.
Sep 9 04:54:08.763944 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 04:54:08.763951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 04:54:08.763958 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 04:54:08.763966 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 04:54:08.763973 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 04:54:08.763981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:54:08.763988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:54:08.763997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:54:08.764005 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 04:54:08.764012 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 04:54:08.764020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 04:54:08.764027 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 04:54:08.764035 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 04:54:08.764042 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 04:54:08.764050 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 04:54:08.764057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 04:54:08.764066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:08.764073 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 04:54:08.764081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:54:08.764088 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 04:54:08.764114 systemd-journald[243]: Collecting audit messages is disabled.
Sep 9 04:54:08.764133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 04:54:08.764142 systemd-journald[243]: Journal started
Sep 9 04:54:08.764162 systemd-journald[243]: Runtime Journal (/run/log/journal/27c70f9417d0489e930af9a97ff3d5b6) is 6M, max 48.5M, 42.4M free.
Sep 9 04:54:08.757020 systemd-modules-load[244]: Inserted module 'overlay'
Sep 9 04:54:08.768239 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 04:54:08.769869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:08.772637 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 04:54:08.773175 systemd-modules-load[244]: Inserted module 'br_netfilter'
Sep 9 04:54:08.773984 kernel: Bridge firewalling registered
Sep 9 04:54:08.773992 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:54:08.775061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 04:54:08.779196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 04:54:08.780825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 04:54:08.782732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 04:54:08.789349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 04:54:08.797639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:54:08.798972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:54:08.801819 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 04:54:08.804579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:54:08.806013 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 04:54:08.808545 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 04:54:08.810712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 04:54:08.832145 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 04:54:08.846821 systemd-resolved[291]: Positive Trust Anchors:
Sep 9 04:54:08.846838 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 04:54:08.846869 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 04:54:08.851829 systemd-resolved[291]: Defaulting to hostname 'linux'.
Sep 9 04:54:08.852927 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 04:54:08.855732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:54:08.909715 kernel: SCSI subsystem initialized
Sep 9 04:54:08.913703 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 04:54:08.921718 kernel: iscsi: registered transport (tcp)
Sep 9 04:54:08.934709 kernel: iscsi: registered transport (qla4xxx)
Sep 9 04:54:08.934728 kernel: QLogic iSCSI HBA Driver
Sep 9 04:54:08.950983 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 04:54:08.964068 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:54:08.966087 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 04:54:09.010462 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 04:54:09.012662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 04:54:09.072714 kernel: raid6: neonx8 gen() 15744 MB/s
Sep 9 04:54:09.089710 kernel: raid6: neonx4 gen() 15792 MB/s
Sep 9 04:54:09.108712 kernel: raid6: neonx2 gen() 13180 MB/s
Sep 9 04:54:09.125709 kernel: raid6: neonx1 gen() 8343 MB/s
Sep 9 04:54:09.142715 kernel: raid6: int64x8 gen() 6877 MB/s
Sep 9 04:54:09.159703 kernel: raid6: int64x4 gen() 7350 MB/s
Sep 9 04:54:09.176713 kernel: raid6: int64x2 gen() 6106 MB/s
Sep 9 04:54:09.193734 kernel: raid6: int64x1 gen() 4750 MB/s
Sep 9 04:54:09.193775 kernel: raid6: using algorithm neonx4 gen() 15792 MB/s
Sep 9 04:54:09.210727 kernel: raid6: .... xor() 12330 MB/s, rmw enabled
Sep 9 04:54:09.210758 kernel: raid6: using neon recovery algorithm
Sep 9 04:54:09.215755 kernel: xor: measuring software checksum speed
Sep 9 04:54:09.215776 kernel: 8regs : 21573 MB/sec
Sep 9 04:54:09.216809 kernel: 32regs : 21676 MB/sec
Sep 9 04:54:09.216821 kernel: arm64_neon : 28089 MB/sec
Sep 9 04:54:09.216831 kernel: xor: using function: arm64_neon (28089 MB/sec)
Sep 9 04:54:09.268726 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 04:54:09.275763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 04:54:09.278239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:54:09.305790 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 9 04:54:09.309838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:54:09.311542 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 04:54:09.342096 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 9 04:54:09.364257 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 04:54:09.366913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 04:54:09.418960 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:54:09.421940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 04:54:09.474713 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 04:54:09.474871 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 04:54:09.478183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:09.478305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:09.484328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 04:54:09.484349 kernel: GPT:9289727 != 19775487
Sep 9 04:54:09.484358 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 04:54:09.484367 kernel: GPT:9289727 != 19775487
Sep 9 04:54:09.484365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:09.488070 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 04:54:09.488090 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 04:54:09.488137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:09.512142 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 04:54:09.514279 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 04:54:09.515361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:09.530234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 04:54:09.537067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 04:54:09.538025 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 04:54:09.551906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 04:54:09.552893 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 04:54:09.554575 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:54:09.556427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 04:54:09.558949 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 04:54:09.560466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 04:54:09.584877 disk-uuid[593]: Primary Header is updated.
Sep 9 04:54:09.584877 disk-uuid[593]: Secondary Entries is updated.
Sep 9 04:54:09.584877 disk-uuid[593]: Secondary Header is updated.
Sep 9 04:54:09.587655 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 04:54:09.589497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 04:54:10.597727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 04:54:10.598346 disk-uuid[598]: The operation has completed successfully.
Sep 9 04:54:10.628447 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 04:54:10.628557 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 04:54:10.654519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 04:54:10.679785 sh[612]: Success
Sep 9 04:54:10.691777 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 04:54:10.691816 kernel: device-mapper: uevent: version 1.0.3
Sep 9 04:54:10.692709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 04:54:10.699709 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 04:54:10.722884 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 04:54:10.725540 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 04:54:10.737744 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 04:54:10.744707 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 9 04:54:10.746719 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364
Sep 9 04:54:10.746766 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:10.750984 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 04:54:10.751017 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 04:54:10.752118 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 04:54:10.753315 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 04:54:10.754755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 04:54:10.755512 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 04:54:10.758403 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 04:54:10.783459 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 9 04:54:10.783514 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:10.783524 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:10.787827 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 04:54:10.787869 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 04:54:10.792700 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:10.794060 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 04:54:10.795950 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 04:54:10.866586 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:54:10.869201 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 04:54:10.898929 ignition[702]: Ignition 2.22.0
Sep 9 04:54:10.898945 ignition[702]: Stage: fetch-offline
Sep 9 04:54:10.898978 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:10.898985 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 04:54:10.899066 ignition[702]: parsed url from cmdline: ""
Sep 9 04:54:10.899069 ignition[702]: no config URL provided
Sep 9 04:54:10.899073 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 04:54:10.899080 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Sep 9 04:54:10.899100 ignition[702]: op(1): [started] loading QEMU firmware config module
Sep 9 04:54:10.899104 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 04:54:10.907313 ignition[702]: op(1): [finished] loading QEMU firmware config module
Sep 9 04:54:10.908109 systemd-networkd[805]: lo: Link UP
Sep 9 04:54:10.908113 systemd-networkd[805]: lo: Gained carrier
Sep 9 04:54:10.908795 systemd-networkd[805]: Enumeration completed
Sep 9 04:54:10.908916 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 04:54:10.909178 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:10.909181 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:54:10.909993 systemd-networkd[805]: eth0: Link UP
Sep 9 04:54:10.910133 systemd-networkd[805]: eth0: Gained carrier
Sep 9 04:54:10.910142 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:10.910219 systemd[1]: Reached target network.target - Network.
Sep 9 04:54:10.927778 systemd-networkd[805]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 04:54:10.960950 ignition[702]: parsing config with SHA512: 484efa1451dca15ec8bc1b0eab5490434236e4134d6d5f9001beebc522489c5bd613aecae463941e4057196466778e445c1c94417a19108fa38a9a99245c35be
Sep 9 04:54:10.965406 unknown[702]: fetched base config from "system"
Sep 9 04:54:10.965423 unknown[702]: fetched user config from "qemu"
Sep 9 04:54:10.966043 ignition[702]: fetch-offline: fetch-offline passed
Sep 9 04:54:10.966126 ignition[702]: Ignition finished successfully
Sep 9 04:54:10.968757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:54:10.969813 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 04:54:10.970501 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 04:54:11.003927 ignition[814]: Ignition 2.22.0
Sep 9 04:54:11.003943 ignition[814]: Stage: kargs
Sep 9 04:54:11.004074 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:11.004082 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 04:54:11.004986 ignition[814]: kargs: kargs passed
Sep 9 04:54:11.005038 ignition[814]: Ignition finished successfully
Sep 9 04:54:11.010771 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 04:54:11.012487 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 04:54:11.039420 ignition[822]: Ignition 2.22.0
Sep 9 04:54:11.039438 ignition[822]: Stage: disks
Sep 9 04:54:11.039568 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:11.039588 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 04:54:11.040368 ignition[822]: disks: disks passed
Sep 9 04:54:11.042255 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 04:54:11.040411 ignition[822]: Ignition finished successfully
Sep 9 04:54:11.043523 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 04:54:11.044631 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 04:54:11.046404 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:54:11.047791 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:54:11.049357 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:54:11.051850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 04:54:11.081284 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 04:54:11.085573 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 04:54:11.088795 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 04:54:11.142706 kernel: EXT4-fs (vda9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none.
Sep 9 04:54:11.143058 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 04:54:11.144161 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:54:11.146212 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:54:11.147702 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 04:54:11.148594 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 04:54:11.148636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 04:54:11.148658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:54:11.163204 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 04:54:11.165484 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 04:54:11.169431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Sep 9 04:54:11.169498 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:11.169528 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:11.171239 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 04:54:11.171266 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 04:54:11.172477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:54:11.198235 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 04:54:11.202259 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 9 04:54:11.206101 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 04:54:11.209559 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 04:54:11.273440 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 04:54:11.275263 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 04:54:11.276560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 04:54:11.294713 kernel: BTRFS info (device vda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:11.304769 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 04:54:11.317699 ignition[953]: INFO : Ignition 2.22.0
Sep 9 04:54:11.317699 ignition[953]: INFO : Stage: mount
Sep 9 04:54:11.319825 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:11.319825 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 04:54:11.319825 ignition[953]: INFO : mount: mount passed
Sep 9 04:54:11.319825 ignition[953]: INFO : Ignition finished successfully
Sep 9 04:54:11.320921 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 04:54:11.323444 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 04:54:11.752968 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 04:54:11.754511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:54:11.782702 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 9 04:54:11.784395 kernel: BTRFS info (device vda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:11.784420 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:11.786708 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 04:54:11.786735 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 04:54:11.787983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:54:11.822011 ignition[983]: INFO : Ignition 2.22.0
Sep 9 04:54:11.822011 ignition[983]: INFO : Stage: files
Sep 9 04:54:11.823451 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:11.823451 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 04:54:11.823451 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 04:54:11.826340 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 04:54:11.826340 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 04:54:11.828817 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 04:54:11.829901 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 04:54:11.829901 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 04:54:11.829363 unknown[983]: wrote ssh authorized keys file for user: core
Sep 9 04:54:11.832984 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 04:54:11.832984 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 9 04:54:11.884466 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 04:54:12.205505 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 04:54:12.205505 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 04:54:12.208837 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 04:54:12.219782 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 04:54:12.219782 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 04:54:12.219782 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 04:54:12.219782 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 04:54:12.219782 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 04:54:12.219782 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 04:54:12.334974 systemd-networkd[805]: eth0: Gained IPv6LL
Sep 9 04:54:12.613233 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 04:54:13.046796 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 04:54:13.046796 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 04:54:13.050423 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 04:54:13.066580 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 04:54:13.069728 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 04:54:13.070920 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 04:54:13.070920 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 04:54:13.070920 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 04:54:13.070920 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 04:54:13.070920 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 04:54:13.070920 ignition[983]: INFO : files: files passed
Sep 9 04:54:13.070920 ignition[983]: INFO : Ignition finished successfully
Sep 9 04:54:13.073892 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 04:54:13.076847 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 04:54:13.079310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 04:54:13.090753 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 04:54:13.090854 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 04:54:13.093513 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 04:54:13.094811 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:54:13.094811 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:54:13.097344 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:54:13.097054 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:54:13.098502 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 04:54:13.101051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 04:54:13.132501 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 04:54:13.132649 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 04:54:13.134447 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 04:54:13.135892 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 04:54:13.137434 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 04:54:13.138266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 04:54:13.152035 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:54:13.154219 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 04:54:13.177989 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:54:13.179737 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:54:13.180741 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 04:54:13.182242 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 04:54:13.182373 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:54:13.184410 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 04:54:13.186103 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 04:54:13.187477 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 04:54:13.188906 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:54:13.190564 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 04:54:13.192147 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 04:54:13.193707 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 04:54:13.195386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 04:54:13.196949 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 04:54:13.198584 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 04:54:13.200032 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 04:54:13.201267 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 04:54:13.201396 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 04:54:13.203335 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:54:13.204992 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:54:13.206549 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 04:54:13.209769 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:54:13.210820 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 04:54:13.210945 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 04:54:13.213339 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 04:54:13.213461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:54:13.215227 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 04:54:13.216504 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 04:54:13.219755 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:54:13.220822 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 04:54:13.222660 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 04:54:13.224037 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 04:54:13.224127 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 04:54:13.225485 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 04:54:13.225563 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 04:54:13.226918 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 04:54:13.227035 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:54:13.228679 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 04:54:13.228805 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 04:54:13.230940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 04:54:13.233198 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 04:54:13.234300 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 04:54:13.234424 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:54:13.236022 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 04:54:13.236114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 04:54:13.241359 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 04:54:13.241437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 04:54:13.249743 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 04:54:13.254851 ignition[1039]: INFO : Ignition 2.22.0
Sep 9 04:54:13.254851 ignition[1039]: INFO : Stage: umount
Sep 9 04:54:13.257349 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:13.257349 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 04:54:13.257349 ignition[1039]: INFO : umount: umount passed
Sep 9 04:54:13.257349 ignition[1039]: INFO : Ignition finished successfully
Sep 9 04:54:13.258864 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 04:54:13.258986 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 04:54:13.263093 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 04:54:13.264713 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 04:54:13.266361 systemd[1]: Stopped target network.target - Network.
Sep 9 04:54:13.267553 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 04:54:13.267627 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 04:54:13.269173 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 04:54:13.269219 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 04:54:13.270480 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 04:54:13.270521 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 04:54:13.271811 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 04:54:13.271851 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 04:54:13.273302 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 04:54:13.273363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 04:54:13.275049 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 04:54:13.276445 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 04:54:13.286070 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 04:54:13.286167 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 04:54:13.288826 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 04:54:13.289435 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 04:54:13.289504 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:54:13.296890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 04:54:13.297091 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 04:54:13.297202 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 04:54:13.300592 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 04:54:13.301027 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 04:54:13.303642 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 04:54:13.303681 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:54:13.306259 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 04:54:13.309421 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 04:54:13.309484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:54:13.310705 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 04:54:13.310750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:54:13.313581 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 04:54:13.313625 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:54:13.314887 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:54:13.319186 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 04:54:13.335389 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 04:54:13.335545 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:54:13.337656 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 04:54:13.337749 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 04:54:13.339554 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 04:54:13.339666 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:54:13.341675 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 04:54:13.341728 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:54:13.343215 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 04:54:13.343266 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 04:54:13.345486 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 04:54:13.345529 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 04:54:13.347813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 04:54:13.347863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 04:54:13.351180 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 04:54:13.352867 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 04:54:13.352926 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:54:13.355743 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 04:54:13.355791 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:54:13.358374 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 04:54:13.358420 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 04:54:13.361273 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 04:54:13.361315 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:54:13.363285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:13.363332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:13.368788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 04:54:13.368897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 04:54:13.370847 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 04:54:13.373077 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 04:54:13.388665 systemd[1]: Switching root.
Sep 9 04:54:13.432131 systemd-journald[243]: Journal stopped
Sep 9 04:54:14.176530 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 9 04:54:14.176653 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 04:54:14.176674 kernel: SELinux: policy capability open_perms=1
Sep 9 04:54:14.176684 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 04:54:14.176709 kernel: SELinux: policy capability always_check_network=0
Sep 9 04:54:14.176721 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 04:54:14.176730 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 04:54:14.176740 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 04:54:14.176749 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 04:54:14.176758 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 04:54:14.176771 kernel: audit: type=1403 audit(1757393653.616:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 04:54:14.176789 systemd[1]: Successfully loaded SELinux policy in 57.257ms.
Sep 9 04:54:14.176802 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.288ms.
Sep 9 04:54:14.176815 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 04:54:14.176825 systemd[1]: Detected virtualization kvm.
Sep 9 04:54:14.176835 systemd[1]: Detected architecture arm64.
Sep 9 04:54:14.176845 systemd[1]: Detected first boot.
Sep 9 04:54:14.176859 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 04:54:14.176869 zram_generator::config[1084]: No configuration found.
Sep 9 04:54:14.176880 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 04:54:14.176889 systemd[1]: Populated /etc with preset unit settings.
Sep 9 04:54:14.176901 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 04:54:14.176912 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 04:54:14.176921 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 04:54:14.176932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:54:14.176942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 04:54:14.176952 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 04:54:14.176962 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 04:54:14.176971 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 04:54:14.176982 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 04:54:14.176994 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 04:54:14.177004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 04:54:14.177014 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 04:54:14.177024 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:54:14.177035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:54:14.177045 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 04:54:14.177055 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 04:54:14.177065 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 04:54:14.177076 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 04:54:14.177086 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 04:54:14.177096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:54:14.177106 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:54:14.177116 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 04:54:14.177126 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 04:54:14.177136 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:54:14.177146 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 04:54:14.177157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:54:14.177167 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 04:54:14.177178 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 04:54:14.177188 systemd[1]: Reached target swap.target - Swaps.
Sep 9 04:54:14.177198 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 04:54:14.177207 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 04:54:14.177217 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 04:54:14.177227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:54:14.177237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:54:14.177248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:54:14.177258 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 04:54:14.177268 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 04:54:14.177278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 04:54:14.177288 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 04:54:14.177298 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 04:54:14.177308 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 04:54:14.177318 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 04:54:14.177328 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 04:54:14.177339 systemd[1]: Reached target machines.target - Containers.
Sep 9 04:54:14.177349 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 04:54:14.177359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:14.177373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 04:54:14.177383 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 04:54:14.177393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:14.177403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 04:54:14.177413 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:14.177423 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 04:54:14.177434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:14.177444 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 04:54:14.177454 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 04:54:14.177464 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 04:54:14.177474 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 04:54:14.177484 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 04:54:14.177495 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:14.177506 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 04:54:14.177517 kernel: loop: module loaded
Sep 9 04:54:14.177526 kernel: fuse: init (API version 7.41)
Sep 9 04:54:14.177536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 04:54:14.177546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 04:54:14.177555 kernel: ACPI: bus type drm_connector registered
Sep 9 04:54:14.177575 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 04:54:14.177587 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 04:54:14.177597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 04:54:14.177607 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 04:54:14.177619 systemd[1]: Stopped verity-setup.service.
Sep 9 04:54:14.177630 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 04:54:14.177639 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 04:54:14.177681 systemd-journald[1152]: Collecting audit messages is disabled.
Sep 9 04:54:14.177739 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 04:54:14.177751 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 04:54:14.177761 systemd-journald[1152]: Journal started
Sep 9 04:54:14.177782 systemd-journald[1152]: Runtime Journal (/run/log/journal/27c70f9417d0489e930af9a97ff3d5b6) is 6M, max 48.5M, 42.4M free.
Sep 9 04:54:14.177820 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 04:54:13.970562 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 04:54:13.991574 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 04:54:13.991940 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 04:54:14.181015 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 04:54:14.181761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 04:54:14.182981 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 04:54:14.185705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:54:14.186964 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 04:54:14.187136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 04:54:14.188485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:14.188660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:14.190021 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 04:54:14.190184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 04:54:14.191373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:14.191530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:14.192855 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 04:54:14.193005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 04:54:14.194292 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:14.194441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:14.196770 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:54:14.197991 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:54:14.199377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 04:54:14.200834 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 04:54:14.212777 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 04:54:14.215057 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 04:54:14.216786 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 04:54:14.217820 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 04:54:14.217875 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:54:14.219811 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 04:54:14.232874 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 04:54:14.233947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:14.235712 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 04:54:14.237535 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 04:54:14.238795 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 04:54:14.239941 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 04:54:14.241060 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 04:54:14.242712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 04:54:14.247815 systemd-journald[1152]: Time spent on flushing to /var/log/journal/27c70f9417d0489e930af9a97ff3d5b6 is 21.679ms for 885 entries.
Sep 9 04:54:14.247815 systemd-journald[1152]: System Journal (/var/log/journal/27c70f9417d0489e930af9a97ff3d5b6) is 8M, max 195.6M, 187.6M free.
Sep 9 04:54:14.287763 systemd-journald[1152]: Received client request to flush runtime journal.
Sep 9 04:54:14.287814 kernel: loop0: detected capacity change from 0 to 211168
Sep 9 04:54:14.287833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 04:54:14.248792 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 04:54:14.254078 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 04:54:14.256792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:54:14.258295 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 04:54:14.260866 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 04:54:14.265732 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 04:54:14.268020 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 04:54:14.270583 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 04:54:14.284008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:54:14.291816 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 04:54:14.296720 kernel: loop1: detected capacity change from 0 to 100632
Sep 9 04:54:14.301721 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 04:54:14.304731 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 04:54:14.307421 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Sep 9 04:54:14.307438 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Sep 9 04:54:14.310776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 04:54:14.313352 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 04:54:14.318717 kernel: loop2: detected capacity change from 0 to 119368
Sep 9 04:54:14.341115 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 04:54:14.346444 kernel: loop3: detected capacity change from 0 to 211168
Sep 9 04:54:14.344904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 04:54:14.351998 kernel: loop4: detected capacity change from 0 to 100632
Sep 9 04:54:14.358711 kernel: loop5: detected capacity change from 0 to 119368
Sep 9 04:54:14.365477 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 04:54:14.365920 (sd-merge)[1221]: Merged extensions into '/usr'.
Sep 9 04:54:14.370623 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 04:54:14.370638 systemd[1]: Reloading...
Sep 9 04:54:14.373180 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Sep 9 04:54:14.373200 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Sep 9 04:54:14.432712 zram_generator::config[1254]: No configuration found.
Sep 9 04:54:14.533193 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 04:54:14.569864 systemd[1]: Reloading finished in 198 ms.
Sep 9 04:54:14.605332 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 04:54:14.606714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 04:54:14.608135 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:54:14.616799 systemd[1]: Starting ensure-sysext.service...
Sep 9 04:54:14.618469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 04:54:14.627857 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Sep 9 04:54:14.627873 systemd[1]: Reloading...
Sep 9 04:54:14.633315 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 04:54:14.633360 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 04:54:14.633589 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 04:54:14.633758 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 04:54:14.634292 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 04:54:14.634465 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 9 04:54:14.634514 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 9 04:54:14.637397 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 04:54:14.637412 systemd-tmpfiles[1287]: Skipping /boot
Sep 9 04:54:14.643821 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 04:54:14.643835 systemd-tmpfiles[1287]: Skipping /boot
Sep 9 04:54:14.683719 zram_generator::config[1314]: No configuration found.
Sep 9 04:54:14.809573 systemd[1]: Reloading finished in 181 ms.
Sep 9 04:54:14.827083 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 04:54:14.843476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:54:14.850576 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 04:54:14.852830 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 04:54:14.865151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 04:54:14.868144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 04:54:14.870932 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:54:14.874904 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 04:54:14.878534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:14.884532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:14.886556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:14.889963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:14.890965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:14.891091 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:14.893672 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 04:54:14.895622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:14.897732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:14.900036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:14.900261 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:14.902209 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 04:54:14.904187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:14.904366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:14.911308 augenrules[1381]: No rules
Sep 9 04:54:14.911871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:14.913299 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Sep 9 04:54:14.915090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:14.917594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:14.925132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:14.926305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:14.926470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:14.928973 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 04:54:14.931254 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 04:54:14.932968 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 04:54:14.934773 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 04:54:14.936035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:54:14.943218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 04:54:14.946280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:14.946459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:14.948265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:14.948852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:14.950483 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:14.950958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:14.953200 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 04:54:14.958722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 04:54:14.975537 systemd[1]: Finished ensure-sysext.service.
Sep 9 04:54:14.983854 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 04:54:14.984757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:14.985651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:14.987510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 04:54:14.998977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:15.001669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:15.002896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:15.002945 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:15.004947 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 04:54:15.025587 augenrules[1432]: /sbin/augenrules: No change
Sep 9 04:54:15.025919 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 04:54:15.027352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 04:54:15.035333 augenrules[1460]: No rules
Sep 9 04:54:15.043324 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 04:54:15.043854 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 04:54:15.045023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:15.046745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:15.048236 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 04:54:15.048371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 04:54:15.049833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:15.050038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:15.051516 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:15.051719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:15.060443 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 04:54:15.065136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 04:54:15.071859 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 04:54:15.072841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 04:54:15.072924 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 04:54:15.097762 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 04:54:15.104598 systemd-resolved[1353]: Positive Trust Anchors:
Sep 9 04:54:15.104615 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 04:54:15.104646 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 04:54:15.113230 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Sep 9 04:54:15.114412 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 04:54:15.115582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:54:15.115751 systemd-networkd[1442]: lo: Link UP
Sep 9 04:54:15.115762 systemd-networkd[1442]: lo: Gained carrier
Sep 9 04:54:15.116486 systemd-networkd[1442]: Enumeration completed
Sep 9 04:54:15.116887 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:15.116894 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:54:15.117035 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 04:54:15.117414 systemd-networkd[1442]: eth0: Link UP
Sep 9 04:54:15.117510 systemd-networkd[1442]: eth0: Gained carrier
Sep 9 04:54:15.117528 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:15.118309 systemd[1]: Reached target network.target - Network.
Sep 9 04:54:15.120466 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 04:54:15.122528 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 04:54:15.123582 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 04:54:15.124730 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:54:15.125760 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 04:54:15.126829 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 04:54:15.128363 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 04:54:15.129402 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
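The entries above show eth0 being configured from `/usr/lib/systemd/network/zz-default.network`, Flatcar's catch-all fallback unit (hence the "potentially unpredictable interface name" warning: it matches any name). The log does not include the file itself; a minimal sketch of what such a match-anything DHCP unit typically looks like, with contents assumed rather than read from this host:

```ini
# Illustrative catch-all systemd-networkd unit (assumed contents, not
# captured in this log). Matching on Name=* is what triggers the
# "potentially unpredictable interface name" warning above, and
# DHCP=yes is consistent with the DHCPv4 lease eth0 acquires later.
[Match]
Name=*

[Network]
DHCP=yes
```

A more specific unit (e.g. `[Match] Name=eth0` or a MAC-address match) would avoid the warning.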
Sep 9 04:54:15.129431 systemd[1]: Reached target paths.target - Path Units.
Sep 9 04:54:15.130553 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 04:54:15.131522 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 04:54:15.132535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 04:54:15.133553 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 04:54:15.134795 systemd-networkd[1442]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 04:54:15.134984 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 04:54:15.136879 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 04:54:15.139048 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 04:54:15.140107 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 04:54:15.140838 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Sep 9 04:54:15.141050 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 04:54:15.141606 systemd-timesyncd[1453]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 04:54:15.141981 systemd-timesyncd[1453]: Initial clock synchronization to Tue 2025-09-09 04:54:15.256354 UTC.
Sep 9 04:54:15.143520 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 04:54:15.144809 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 04:54:15.146246 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 04:54:15.147259 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 04:54:15.148103 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:54:15.148937 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:54:15.148965 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:54:15.150892 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 04:54:15.152916 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 04:54:15.154868 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 04:54:15.156625 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 04:54:15.158323 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 04:54:15.159214 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 04:54:15.162225 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 04:54:15.164830 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 04:54:15.165074 jq[1491]: false
Sep 9 04:54:15.167852 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 04:54:15.172928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 04:54:15.177818 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 04:54:15.180583 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 04:54:15.180979 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 04:54:15.183156 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 04:54:15.184854 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 04:54:15.188723 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 04:54:15.191073 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 04:54:15.192270 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 04:54:15.192433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 04:54:15.194422 extend-filesystems[1492]: Found /dev/vda6
Sep 9 04:54:15.201851 extend-filesystems[1492]: Found /dev/vda9
Sep 9 04:54:15.205207 jq[1512]: true
Sep 9 04:54:15.207840 extend-filesystems[1492]: Checking size of /dev/vda9
Sep 9 04:54:15.206082 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 04:54:15.206283 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 04:54:15.210004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 04:54:15.210181 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 04:54:15.219710 extend-filesystems[1492]: Resized partition /dev/vda9
Sep 9 04:54:15.222458 extend-filesystems[1533]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 04:54:15.227054 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 04:54:15.230638 update_engine[1510]: I20250909 04:54:15.230387 1510 main.cc:92] Flatcar Update Engine starting
Sep 9 04:54:15.235318 jq[1524]: true
Sep 9 04:54:15.237983 tar[1515]: linux-arm64/LICENSE
Sep 9 04:54:15.237983 tar[1515]: linux-arm64/helm
Sep 9 04:54:15.238759 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 04:54:15.255573 dbus-daemon[1488]: [system] SELinux support is enabled
Sep 9 04:54:15.255756 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 04:54:15.259739 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 04:54:15.259765 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 04:54:15.261535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 04:54:15.261556 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 04:54:15.269407 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 04:54:15.269637 update_engine[1510]: I20250909 04:54:15.269593 1510 update_check_scheduler.cc:74] Next update check in 10m43s
Sep 9 04:54:15.272982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:15.278263 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 04:54:15.280744 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 04:54:15.293931 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 04:54:15.293931 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 04:54:15.293931 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 04:54:15.299909 extend-filesystems[1492]: Resized filesystem in /dev/vda9
Sep 9 04:54:15.296952 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 04:54:15.303896 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 04:54:15.319063 bash[1557]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 04:54:15.320031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
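The resize2fs messages above report the root filesystem growing from 553472 to 1864699 blocks of 4 KiB ("(4k) blocks"). The sizes can be sanity-checked by converting those block counts to bytes; a minimal sketch using the figures from the log:

```python
# Convert the ext4 block counts reported by resize2fs into sizes.
# Block size is 4 KiB, per the "(4k) blocks" note in the log.
BLOCK_SIZE = 4096

old_blocks = 553_472      # "resizing filesystem from 553472 ... blocks"
new_blocks = 1_864_699    # "resized filesystem to 1864699"

old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

print(f"before: {old_bytes / 2**30:.2f} GiB")  # before: 2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")  # after:  7.11 GiB
```

So extend-filesystems grew the root partition's filesystem from roughly 2.1 GiB to about 7.1 GiB online, while mounted on /.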
Sep 9 04:54:15.321997 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 04:54:15.397529 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 04:54:15.417047 containerd[1525]: time="2025-09-09T04:54:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 04:54:15.418562 containerd[1525]: time="2025-09-09T04:54:15.417320480Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 04:54:15.417589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:15.423531 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 04:54:15.424075 systemd-logind[1503]: New seat seat0.
Sep 9 04:54:15.426825 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 04:54:15.433923 containerd[1525]: time="2025-09-09T04:54:15.433881440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.04µs"
Sep 9 04:54:15.433923 containerd[1525]: time="2025-09-09T04:54:15.433915840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 04:54:15.434013 containerd[1525]: time="2025-09-09T04:54:15.433934080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 04:54:15.434096 containerd[1525]: time="2025-09-09T04:54:15.434073600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 04:54:15.434120 containerd[1525]: time="2025-09-09T04:54:15.434097040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 04:54:15.434138 containerd[1525]: time="2025-09-09T04:54:15.434124200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434188 containerd[1525]: time="2025-09-09T04:54:15.434171400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434188 containerd[1525]: time="2025-09-09T04:54:15.434185600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434400 containerd[1525]: time="2025-09-09T04:54:15.434377240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434400 containerd[1525]: time="2025-09-09T04:54:15.434398920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434442 containerd[1525]: time="2025-09-09T04:54:15.434409680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434512 containerd[1525]: time="2025-09-09T04:54:15.434492160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 04:54:15.434621 containerd[1525]: time="2025-09-09T04:54:15.434595280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 04:54:15.435747 containerd[1525]: time="2025-09-09T04:54:15.435710880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 04:54:15.435800 containerd[1525]: time="2025-09-09T04:54:15.435782680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 04:54:15.435834 containerd[1525]: time="2025-09-09T04:54:15.435803200Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 04:54:15.435853 containerd[1525]: time="2025-09-09T04:54:15.435835200Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 04:54:15.436145 containerd[1525]: time="2025-09-09T04:54:15.436127640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 04:54:15.436219 containerd[1525]: time="2025-09-09T04:54:15.436200560Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 04:54:15.439130 containerd[1525]: time="2025-09-09T04:54:15.439076480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 04:54:15.439130 containerd[1525]: time="2025-09-09T04:54:15.439127600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 04:54:15.439211 containerd[1525]: time="2025-09-09T04:54:15.439141680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 04:54:15.439211 containerd[1525]: time="2025-09-09T04:54:15.439153400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 04:54:15.439211 containerd[1525]: time="2025-09-09T04:54:15.439169680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 04:54:15.439211 containerd[1525]: time="2025-09-09T04:54:15.439181160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 04:54:15.439211 containerd[1525]: time="2025-09-09T04:54:15.439192040Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 04:54:15.439211 containerd[1525]: time="2025-09-09T04:54:15.439202960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 04:54:15.439303 containerd[1525]: time="2025-09-09T04:54:15.439216800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 04:54:15.439303 containerd[1525]: time="2025-09-09T04:54:15.439229800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 04:54:15.439303 containerd[1525]: time="2025-09-09T04:54:15.439238840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 04:54:15.439303 containerd[1525]: time="2025-09-09T04:54:15.439250800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 04:54:15.439387 containerd[1525]: time="2025-09-09T04:54:15.439367120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 04:54:15.439419 containerd[1525]: time="2025-09-09T04:54:15.439395880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 04:54:15.439419 containerd[1525]: time="2025-09-09T04:54:15.439410200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439422680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439434880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439445200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439457040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439466520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439488800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439499600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 04:54:15.439511 containerd[1525]: time="2025-09-09T04:54:15.439512480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 04:54:15.439727 containerd[1525]: time="2025-09-09T04:54:15.439710360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 04:54:15.439749 containerd[1525]: time="2025-09-09T04:54:15.439730040Z" level=info msg="Start snapshots syncer"
Sep 9 04:54:15.439783 containerd[1525]: time="2025-09-09T04:54:15.439757040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 04:54:15.440014 containerd[1525]: time="2025-09-09T04:54:15.439977440Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 04:54:15.440108 containerd[1525]: time="2025-09-09T04:54:15.440024120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 04:54:15.440108 containerd[1525]: time="2025-09-09T04:54:15.440090720Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 04:54:15.440248 containerd[1525]: time="2025-09-09T04:54:15.440193360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 04:54:15.440248 containerd[1525]: time="2025-09-09T04:54:15.440221280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 04:54:15.440248 containerd[1525]: time="2025-09-09T04:54:15.440239920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 04:54:15.440304 containerd[1525]: time="2025-09-09T04:54:15.440250760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 04:54:15.440304 containerd[1525]: time="2025-09-09T04:54:15.440265200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 04:54:15.440304 containerd[1525]: time="2025-09-09T04:54:15.440275640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 04:54:15.440304 containerd[1525]: time="2025-09-09T04:54:15.440290320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 04:54:15.440370 containerd[1525]: time="2025-09-09T04:54:15.440313080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 04:54:15.440370 containerd[1525]: time="2025-09-09T04:54:15.440327440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 04:54:15.440370 containerd[1525]: time="2025-09-09T04:54:15.440337120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 04:54:15.440416 containerd[1525]: time="2025-09-09T04:54:15.440371480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 04:54:15.440416 containerd[1525]: time="2025-09-09T04:54:15.440386880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 04:54:15.440416 containerd[1525]: time="2025-09-09T04:54:15.440395560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 04:54:15.440416 containerd[1525]: time="2025-09-09T04:54:15.440404320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 04:54:15.440416 containerd[1525]: time="2025-09-09T04:54:15.440411520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 04:54:15.440497 containerd[1525]: time="2025-09-09T04:54:15.440420080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 04:54:15.440497 containerd[1525]: time="2025-09-09T04:54:15.440430760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 04:54:15.440529 containerd[1525]: time="2025-09-09T04:54:15.440506000Z" level=info msg="runtime interface created"
Sep 9 04:54:15.440529 containerd[1525]: time="2025-09-09T04:54:15.440510960Z" level=info msg="created NRI interface"
Sep 9 04:54:15.440529 containerd[1525]: time="2025-09-09T04:54:15.440518280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 04:54:15.440529 containerd[1525]: time="2025-09-09T04:54:15.440528080Z" level=info msg="Connect containerd service"
Sep 9 04:54:15.440606 containerd[1525]: time="2025-09-09T04:54:15.440553600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 04:54:15.441708 containerd[1525]: time="2025-09-09T04:54:15.441191240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 04:54:15.506397 containerd[1525]: time="2025-09-09T04:54:15.506271360Z" level=info msg="Start subscribing containerd event"
Sep 9 04:54:15.506397 containerd[1525]: time="2025-09-09T04:54:15.506376120Z" level=info msg="Start recovering state"
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506500720Z" level=info msg="Start event monitor"
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506522080Z" level=info msg="Start cni network conf syncer for default"
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506543360Z" level=info msg="Start streaming server"
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506579760Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506587920Z" level=info msg="runtime interface starting up..."
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506593200Z" level=info msg="starting plugins..."
Sep 9 04:54:15.506629 containerd[1525]: time="2025-09-09T04:54:15.506607120Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 04:54:15.506990 containerd[1525]: time="2025-09-09T04:54:15.506965640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 04:54:15.507150 containerd[1525]: time="2025-09-09T04:54:15.507132640Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 04:54:15.507390 containerd[1525]: time="2025-09-09T04:54:15.507375200Z" level=info msg="containerd successfully booted in 0.090964s"
Sep 9 04:54:15.507414 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 04:54:15.596923 tar[1515]: linux-arm64/README.md
Sep 9 04:54:15.612929 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 04:54:15.693281 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 04:54:15.712575 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 04:54:15.716177 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 04:54:15.735042 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 04:54:15.735285 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 04:54:15.737737 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 04:54:15.760754 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 04:54:15.764931 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 04:54:15.766926 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 04:54:15.768116 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 04:54:16.815329 systemd-networkd[1442]: eth0: Gained IPv6LL
Sep 9 04:54:16.817945 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 04:54:16.819394 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 04:54:16.823089 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 04:54:16.825275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:54:16.827246 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 04:54:16.853940 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 04:54:16.855446 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 04:54:16.855681 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 04:54:16.857491 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 04:54:17.375520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:54:17.377179 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 04:54:17.381062 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:54:17.381815 systemd[1]: Startup finished in 2.005s (kernel) + 5.009s (initrd) + 3.822s (userspace) = 10.838s.
Sep 9 04:54:17.748223 kubelet[1636]: E0909 04:54:17.748092 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:54:17.750391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:54:17.750523 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:54:17.750875 systemd[1]: kubelet.service: Consumed 754ms CPU time, 259.1M memory peak.
Sep 9 04:54:21.888101 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
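The kubelet exit logged above is the expected state of a node that has booted but not yet joined a cluster: `/var/lib/kubelet/config.yaml` is normally written by `kubeadm init` or `kubeadm join`, so until one of those runs, kubelet.service starts, fails to find the file, and exits. A minimal sketch of the kind of KubeletConfiguration that ends up at that path, with field values illustrative rather than taken from this host:

```yaml
# Illustrative KubeletConfiguration (values assumed, not from this host).
# kubeadm writes a file of this shape to /var/lib/kubelet/config.yaml
# during "kubeadm init" / "kubeadm join"; its absence is what caused the
# "command failed" error above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup=true in the containerd runc options logged earlier
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```

The earlier containerd error about missing CNI config in /etc/cni/net.d points to the same pre-join state: pod networking is set up only once the node is bootstrapped into a cluster.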
Sep 9 04:54:21.889070 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:57190.service - OpenSSH per-connection server daemon (10.0.0.1:57190).
Sep 9 04:54:21.970662 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 57190 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:21.972476 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:21.978221 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 04:54:21.979162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 04:54:21.984157 systemd-logind[1503]: New session 1 of user core.
Sep 9 04:54:21.997339 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 04:54:22.001121 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 04:54:22.020860 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 04:54:22.022997 systemd-logind[1503]: New session c1 of user core.
Sep 9 04:54:22.132639 systemd[1654]: Queued start job for default target default.target.
Sep 9 04:54:22.144652 systemd[1654]: Created slice app.slice - User Application Slice.
Sep 9 04:54:22.144683 systemd[1654]: Reached target paths.target - Paths.
Sep 9 04:54:22.144753 systemd[1654]: Reached target timers.target - Timers.
Sep 9 04:54:22.145976 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 04:54:22.156268 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 04:54:22.156387 systemd[1654]: Reached target sockets.target - Sockets.
Sep 9 04:54:22.156441 systemd[1654]: Reached target basic.target - Basic System.
Sep 9 04:54:22.156469 systemd[1654]: Reached target default.target - Main User Target.
Sep 9 04:54:22.156506 systemd[1654]: Startup finished in 128ms.
Sep 9 04:54:22.156598 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 04:54:22.157895 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 04:54:22.222088 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:57200.service - OpenSSH per-connection server daemon (10.0.0.1:57200).
Sep 9 04:54:22.274260 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 57200 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:22.275530 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:22.279782 systemd-logind[1503]: New session 2 of user core.
Sep 9 04:54:22.290885 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 04:54:22.342508 sshd[1668]: Connection closed by 10.0.0.1 port 57200
Sep 9 04:54:22.342954 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 9 04:54:22.355658 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:57200.service: Deactivated successfully.
Sep 9 04:54:22.358946 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 04:54:22.359679 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit.
Sep 9 04:54:22.361706 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:57210.service - OpenSSH per-connection server daemon (10.0.0.1:57210).
Sep 9 04:54:22.362207 systemd-logind[1503]: Removed session 2.
Sep 9 04:54:22.415579 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 57210 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:22.418108 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:22.422500 systemd-logind[1503]: New session 3 of user core.
Sep 9 04:54:22.427874 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 04:54:22.476214 sshd[1677]: Connection closed by 10.0.0.1 port 57210
Sep 9 04:54:22.476604 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
Sep 9 04:54:22.489637 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:57210.service: Deactivated successfully.
Sep 9 04:54:22.492061 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 04:54:22.493424 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit.
Sep 9 04:54:22.495787 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:57220.service - OpenSSH per-connection server daemon (10.0.0.1:57220).
Sep 9 04:54:22.496869 systemd-logind[1503]: Removed session 3.
Sep 9 04:54:22.556451 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 57220 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:22.557521 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:22.561061 systemd-logind[1503]: New session 4 of user core.
Sep 9 04:54:22.577849 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 04:54:22.628740 sshd[1686]: Connection closed by 10.0.0.1 port 57220
Sep 9 04:54:22.629038 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Sep 9 04:54:22.641746 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:57220.service: Deactivated successfully.
Sep 9 04:54:22.644019 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 04:54:22.644648 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit.
Sep 9 04:54:22.646796 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:57226.service - OpenSSH per-connection server daemon (10.0.0.1:57226).
Sep 9 04:54:22.647219 systemd-logind[1503]: Removed session 4.
Sep 9 04:54:22.693343 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 57226 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:22.694475 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:22.697934 systemd-logind[1503]: New session 5 of user core.
Sep 9 04:54:22.704834 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 04:54:22.760379 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 04:54:22.760623 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:54:22.776599 sudo[1696]: pam_unix(sudo:session): session closed for user root
Sep 9 04:54:22.779193 sshd[1695]: Connection closed by 10.0.0.1 port 57226
Sep 9 04:54:22.778469 sshd-session[1692]: pam_unix(sshd:session): session closed for user core
Sep 9 04:54:22.790681 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:57226.service: Deactivated successfully.
Sep 9 04:54:22.792112 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 04:54:22.792861 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit.
Sep 9 04:54:22.795064 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:57230.service - OpenSSH per-connection server daemon (10.0.0.1:57230).
Sep 9 04:54:22.796759 systemd-logind[1503]: Removed session 5.
Sep 9 04:54:22.850967 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 57230 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:22.852119 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:22.856652 systemd-logind[1503]: New session 6 of user core.
Sep 9 04:54:22.867902 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 04:54:22.919208 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 04:54:22.919788 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:54:22.999166 sudo[1707]: pam_unix(sudo:session): session closed for user root
Sep 9 04:54:23.004154 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 04:54:23.004418 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:54:23.014014 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 04:54:23.049107 augenrules[1729]: No rules
Sep 9 04:54:23.049958 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 04:54:23.050179 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 04:54:23.051028 sudo[1706]: pam_unix(sudo:session): session closed for user root
Sep 9 04:54:23.052635 sshd[1705]: Connection closed by 10.0.0.1 port 57230
Sep 9 04:54:23.052570 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Sep 9 04:54:23.063525 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:57230.service: Deactivated successfully.
Sep 9 04:54:23.064961 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 04:54:23.065594 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit.
Sep 9 04:54:23.067097 systemd-logind[1503]: Removed session 6.
Sep 9 04:54:23.068076 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:57242.service - OpenSSH per-connection server daemon (10.0.0.1:57242).
Sep 9 04:54:23.136160 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:54:23.137288 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:54:23.140710 systemd-logind[1503]: New session 7 of user core.
Sep 9 04:54:23.152918 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 04:54:23.203184 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 04:54:23.203725 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:54:23.468637 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 04:54:23.487027 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 04:54:23.684372 dockerd[1763]: time="2025-09-09T04:54:23.684308780Z" level=info msg="Starting up"
Sep 9 04:54:23.685135 dockerd[1763]: time="2025-09-09T04:54:23.685115863Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 04:54:23.695055 dockerd[1763]: time="2025-09-09T04:54:23.694905048Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 04:54:23.764296 dockerd[1763]: time="2025-09-09T04:54:23.764174114Z" level=info msg="Loading containers: start."
Sep 9 04:54:23.781736 kernel: Initializing XFRM netlink socket
Sep 9 04:54:23.975810 systemd-networkd[1442]: docker0: Link UP
Sep 9 04:54:23.979384 dockerd[1763]: time="2025-09-09T04:54:23.979322418Z" level=info msg="Loading containers: done."
Sep 9 04:54:23.993314 dockerd[1763]: time="2025-09-09T04:54:23.992971113Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 04:54:23.993314 dockerd[1763]: time="2025-09-09T04:54:23.993060816Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 04:54:23.993314 dockerd[1763]: time="2025-09-09T04:54:23.993143439Z" level=info msg="Initializing buildkit"
Sep 9 04:54:24.014395 dockerd[1763]: time="2025-09-09T04:54:24.014295512Z" level=info msg="Completed buildkit initialization"
Sep 9 04:54:24.021309 dockerd[1763]: time="2025-09-09T04:54:24.021263185Z" level=info msg="Daemon has completed initialization"
Sep 9 04:54:24.021512 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 04:54:24.021825 dockerd[1763]: time="2025-09-09T04:54:24.021456333Z" level=info msg="API listen on /run/docker.sock"
Sep 9 04:54:24.770463 containerd[1525]: time="2025-09-09T04:54:24.770424396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 04:54:25.283621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838024410.mount: Deactivated successfully.
Sep 9 04:54:26.633143 containerd[1525]: time="2025-09-09T04:54:26.633084352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:26.633719 containerd[1525]: time="2025-09-09T04:54:26.633675455Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 9 04:54:26.634511 containerd[1525]: time="2025-09-09T04:54:26.634471650Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:26.637076 containerd[1525]: time="2025-09-09T04:54:26.637044769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:26.638766 containerd[1525]: time="2025-09-09T04:54:26.638732317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.868268055s"
Sep 9 04:54:26.638805 containerd[1525]: time="2025-09-09T04:54:26.638770742Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 9 04:54:26.640015 containerd[1525]: time="2025-09-09T04:54:26.639990128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 04:54:27.799085 containerd[1525]: time="2025-09-09T04:54:27.799021824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:27.800090 containerd[1525]: time="2025-09-09T04:54:27.799764991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 9 04:54:27.800334 containerd[1525]: time="2025-09-09T04:54:27.800312514Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:27.802723 containerd[1525]: time="2025-09-09T04:54:27.802681475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:27.803846 containerd[1525]: time="2025-09-09T04:54:27.803736308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.163712898s"
Sep 9 04:54:27.803846 containerd[1525]: time="2025-09-09T04:54:27.803765725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 9 04:54:27.804230 containerd[1525]: time="2025-09-09T04:54:27.804206175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 04:54:28.000908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:54:28.002266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:54:28.122027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:54:28.125339 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:54:28.159337 kubelet[2046]: E0909 04:54:28.159269 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:54:28.162964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:54:28.163091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:54:28.164807 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.3M memory peak.
Sep 9 04:54:29.235707 containerd[1525]: time="2025-09-09T04:54:29.235655242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:29.236592 containerd[1525]: time="2025-09-09T04:54:29.236406658Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 9 04:54:29.237215 containerd[1525]: time="2025-09-09T04:54:29.237163888Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:29.239642 containerd[1525]: time="2025-09-09T04:54:29.239610461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:29.240599 containerd[1525]: time="2025-09-09T04:54:29.240566914Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.436319295s"
Sep 9 04:54:29.240635 containerd[1525]: time="2025-09-09T04:54:29.240599637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 9 04:54:29.240977 containerd[1525]: time="2025-09-09T04:54:29.240954131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 04:54:30.255179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749727391.mount: Deactivated successfully.
Sep 9 04:54:30.633539 containerd[1525]: time="2025-09-09T04:54:30.633426652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:30.634485 containerd[1525]: time="2025-09-09T04:54:30.634428504Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 9 04:54:30.635083 containerd[1525]: time="2025-09-09T04:54:30.635037650Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:30.636995 containerd[1525]: time="2025-09-09T04:54:30.636950474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:30.637714 containerd[1525]: time="2025-09-09T04:54:30.637440917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.39645531s"
Sep 9 04:54:30.637714 containerd[1525]: time="2025-09-09T04:54:30.637473549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 9 04:54:30.637897 containerd[1525]: time="2025-09-09T04:54:30.637862368Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 9 04:54:31.327612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671943295.mount: Deactivated successfully.
Sep 9 04:54:32.157157 containerd[1525]: time="2025-09-09T04:54:32.157105091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:32.157743 containerd[1525]: time="2025-09-09T04:54:32.157715083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 9 04:54:32.159312 containerd[1525]: time="2025-09-09T04:54:32.159275522Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:32.161419 containerd[1525]: time="2025-09-09T04:54:32.161385612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:32.162945 containerd[1525]: time="2025-09-09T04:54:32.162768271Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.524857564s"
Sep 9 04:54:32.162945 containerd[1525]: time="2025-09-09T04:54:32.162821721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 9 04:54:32.163432 containerd[1525]: time="2025-09-09T04:54:32.163376821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 04:54:32.584022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200109324.mount: Deactivated successfully.
Sep 9 04:54:32.588814 containerd[1525]: time="2025-09-09T04:54:32.588773144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:54:32.589235 containerd[1525]: time="2025-09-09T04:54:32.589207800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 04:54:32.590162 containerd[1525]: time="2025-09-09T04:54:32.590124991Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:54:32.592143 containerd[1525]: time="2025-09-09T04:54:32.592098089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:54:32.592921 containerd[1525]: time="2025-09-09T04:54:32.592651145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 429.121225ms"
Sep 9 04:54:32.592921 containerd[1525]: time="2025-09-09T04:54:32.592682157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 04:54:32.593540 containerd[1525]: time="2025-09-09T04:54:32.593518893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 9 04:54:32.993210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882962626.mount: Deactivated successfully.
Sep 9 04:54:34.954719 containerd[1525]: time="2025-09-09T04:54:34.954556235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:34.956509 containerd[1525]: time="2025-09-09T04:54:34.956451010Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 9 04:54:34.957938 containerd[1525]: time="2025-09-09T04:54:34.957907297Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:34.961130 containerd[1525]: time="2025-09-09T04:54:34.961096389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:54:34.963251 containerd[1525]: time="2025-09-09T04:54:34.963219420Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.369671362s"
Sep 9 04:54:34.963352 containerd[1525]: time="2025-09-09T04:54:34.963336091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 9 04:54:38.413551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 04:54:38.414980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:54:38.579375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:54:38.583527 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:54:38.620204 kubelet[2209]: E0909 04:54:38.620137 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:54:38.623051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:54:38.623291 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:54:38.624132 systemd[1]: kubelet.service: Consumed 142ms CPU time, 105.7M memory peak.
Sep 9 04:54:40.150044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:54:40.150248 systemd[1]: kubelet.service: Consumed 142ms CPU time, 105.7M memory peak.
Sep 9 04:54:40.152410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:54:40.175383 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-7.scope)...
Sep 9 04:54:40.175399 systemd[1]: Reloading...
Sep 9 04:54:40.255764 zram_generator::config[2270]: No configuration found.
Sep 9 04:54:40.443899 systemd[1]: Reloading finished in 268 ms.
Sep 9 04:54:40.510290 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 04:54:40.510375 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 04:54:40.510621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:54:40.510676 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak.
Sep 9 04:54:40.513419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:54:40.635148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:54:40.661151 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 04:54:40.695988 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:54:40.695988 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 04:54:40.695988 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:54:40.695988 kubelet[2312]: I0909 04:54:40.695967 2312 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 04:54:41.672724 kubelet[2312]: I0909 04:54:41.672671 2312 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 04:54:41.672724 kubelet[2312]: I0909 04:54:41.672716 2312 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 04:54:41.672941 kubelet[2312]: I0909 04:54:41.672926 2312 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 04:54:41.694671 kubelet[2312]: E0909 04:54:41.694610 2312 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 9 04:54:41.695802 kubelet[2312]: I0909 04:54:41.695232 2312 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 04:54:41.705446 kubelet[2312]: I0909 04:54:41.705420 2312 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 04:54:41.708554 kubelet[2312]: I0909 04:54:41.708523 2312 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 04:54:41.709823 kubelet[2312]: I0909 04:54:41.709787 2312 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 04:54:41.710089 kubelet[2312]: I0909 04:54:41.709922 2312 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 04:54:41.710290 kubelet[2312]: I0909 04:54:41.710276 2312 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 04:54:41.710346 kubelet[2312]: I0909 04:54:41.710336 2312 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 04:54:41.711112 kubelet[2312]: I0909 04:54:41.711092 2312 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 04:54:41.713814 kubelet[2312]: I0909 04:54:41.713665 2312 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 04:54:41.717461 kubelet[2312]: I0909 04:54:41.714325 2312 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 04:54:41.717461 kubelet[2312]: I0909 04:54:41.714527 2312 kubelet.go:386] "Adding apiserver pod source"
Sep 9 04:54:41.717461 kubelet[2312]: I0909 04:54:41.714547 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 04:54:41.717891 kubelet[2312]: E0909 04:54:41.717843 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 9 04:54:41.718008 kubelet[2312]: E0909 04:54:41.717847 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 04:54:41.720534 kubelet[2312]: I0909 04:54:41.718133 2312 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 04:54:41.720534 kubelet[2312]: I0909 04:54:41.720491 2312 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 04:54:41.721421 kubelet[2312]: W0909 04:54:41.721391 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 04:54:41.724992 kubelet[2312]: I0909 04:54:41.724001 2312 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 04:54:41.724992 kubelet[2312]: I0909 04:54:41.724966 2312 server.go:1289] "Started kubelet"
Sep 9 04:54:41.726536 kubelet[2312]: I0909 04:54:41.726344 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 04:54:41.726695 kubelet[2312]: I0909 04:54:41.726598 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 04:54:41.726942 kubelet[2312]: I0909 04:54:41.726915 2312 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 04:54:41.726992 kubelet[2312]: I0909 04:54:41.726970 2312 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 04:54:41.728767 kubelet[2312]: I0909 04:54:41.728276 2312 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 04:54:41.731938 kubelet[2312]: E0909 04:54:41.729212 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863843e11bafb66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 04:54:41.724930918 +0000 UTC m=+1.060213596,LastTimestamp:2025-09-09 04:54:41.724930918 +0000 UTC m=+1.060213596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 04:54:41.731938 kubelet[2312]: I0909 04:54:41.730799 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 04:54:41.731938 kubelet[2312]: E0909 04:54:41.731028 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 04:54:41.731938 kubelet[2312]: I0909 04:54:41.731060 2312 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 04:54:41.731938 kubelet[2312]: I0909 04:54:41.731320 2312 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 04:54:41.731938 kubelet[2312]: I0909 04:54:41.731366 2312 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 04:54:41.732956 kubelet[2312]: E0909 04:54:41.732908 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 04:54:41.737722 kubelet[2312]: E0909 04:54:41.733837 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
Sep 9 04:54:41.737722 kubelet[2312]: I0909 04:54:41.734029 2312 factory.go:223] Registration of the systemd container factory successfully
Sep 9 04:54:41.737722 kubelet[2312]: E0909 04:54:41.734069 2312 kubelet.go:1600] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 04:54:41.737722 kubelet[2312]: I0909 04:54:41.734136 2312 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 04:54:41.737722 kubelet[2312]: I0909 04:54:41.735625 2312 factory.go:223] Registration of the containerd container factory successfully Sep 9 04:54:41.741146 kubelet[2312]: I0909 04:54:41.740959 2312 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 04:54:41.746110 kubelet[2312]: I0909 04:54:41.746083 2312 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 04:54:41.746110 kubelet[2312]: I0909 04:54:41.746101 2312 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 04:54:41.746250 kubelet[2312]: I0909 04:54:41.746120 2312 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:41.831953 kubelet[2312]: E0909 04:54:41.831913 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 04:54:41.854982 kubelet[2312]: I0909 04:54:41.854917 2312 policy_none.go:49] "None policy: Start" Sep 9 04:54:41.854982 kubelet[2312]: I0909 04:54:41.854946 2312 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 04:54:41.854982 kubelet[2312]: I0909 04:54:41.854958 2312 state_mem.go:35] "Initializing new in-memory state store" Sep 9 04:54:41.857280 kubelet[2312]: I0909 04:54:41.857247 2312 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 04:54:41.857456 kubelet[2312]: I0909 04:54:41.857398 2312 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 04:54:41.857456 kubelet[2312]: I0909 04:54:41.857423 2312 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 04:54:41.857456 kubelet[2312]: I0909 04:54:41.857429 2312 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 04:54:41.857618 kubelet[2312]: E0909 04:54:41.857599 2312 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 04:54:41.858000 kubelet[2312]: E0909 04:54:41.857959 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 04:54:41.862923 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 04:54:41.876653 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 04:54:41.879930 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 04:54:41.901629 kubelet[2312]: E0909 04:54:41.901473 2312 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 04:54:41.901865 kubelet[2312]: I0909 04:54:41.901833 2312 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 04:54:41.901897 kubelet[2312]: I0909 04:54:41.901853 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 04:54:41.902238 kubelet[2312]: I0909 04:54:41.902208 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 04:54:41.905487 kubelet[2312]: E0909 04:54:41.905460 2312 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 04:54:41.905585 kubelet[2312]: E0909 04:54:41.905522 2312 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 04:54:41.935505 kubelet[2312]: E0909 04:54:41.934835 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Sep 9 04:54:41.975057 systemd[1]: Created slice kubepods-burstable-pod9dac967acb3337ae8a154a48b4b93e7f.slice - libcontainer container kubepods-burstable-pod9dac967acb3337ae8a154a48b4b93e7f.slice. Sep 9 04:54:41.998548 kubelet[2312]: E0909 04:54:41.998468 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:42.001425 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 04:54:42.002920 kubelet[2312]: I0909 04:54:42.002870 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 04:54:42.004264 kubelet[2312]: E0909 04:54:42.003292 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 9 04:54:42.004264 kubelet[2312]: E0909 04:54:42.004194 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:42.028670 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 9 04:54:42.030674 kubelet[2312]: E0909 04:54:42.030465 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:42.133187 kubelet[2312]: I0909 04:54:42.133125 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dac967acb3337ae8a154a48b4b93e7f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9dac967acb3337ae8a154a48b4b93e7f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:42.133291 kubelet[2312]: I0909 04:54:42.133214 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dac967acb3337ae8a154a48b4b93e7f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9dac967acb3337ae8a154a48b4b93e7f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:42.133291 kubelet[2312]: I0909 04:54:42.133266 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:42.133363 kubelet[2312]: I0909 04:54:42.133293 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:42.133363 kubelet[2312]: I0909 04:54:42.133314 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:42.133363 kubelet[2312]: I0909 04:54:42.133329 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dac967acb3337ae8a154a48b4b93e7f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9dac967acb3337ae8a154a48b4b93e7f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:42.133363 kubelet[2312]: I0909 04:54:42.133348 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:42.133442 kubelet[2312]: I0909 04:54:42.133365 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:42.133442 kubelet[2312]: I0909 04:54:42.133380 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:42.205383 kubelet[2312]: I0909 04:54:42.205145 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 04:54:42.205631 kubelet[2312]: E0909 04:54:42.205486 
2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 9 04:54:42.300755 containerd[1525]: time="2025-09-09T04:54:42.300396411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9dac967acb3337ae8a154a48b4b93e7f,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:42.305028 containerd[1525]: time="2025-09-09T04:54:42.304973810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:42.332452 containerd[1525]: time="2025-09-09T04:54:42.332405554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:42.333259 containerd[1525]: time="2025-09-09T04:54:42.332807973Z" level=info msg="connecting to shim 8bf7eb21506707bd6f640954e49087258b5a5fa994554a1e1f29ed81b86df8c1" address="unix:///run/containerd/s/83afe87a108e4eaac7014076611791456d35db47248b764b8d9383fb65907776" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:42.335216 containerd[1525]: time="2025-09-09T04:54:42.335043369Z" level=info msg="connecting to shim 87f2b4d9d45558cbff1134dd9e9f312d48212b25478f8529ff8c5cc03f8940ea" address="unix:///run/containerd/s/39b23e70378a45cca92f5dfc7b5abeb3aff70c30ad82aed02d1ef89e6766e3b8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:42.335704 kubelet[2312]: E0909 04:54:42.335640 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Sep 9 04:54:42.361465 containerd[1525]: time="2025-09-09T04:54:42.361420883Z" level=info msg="connecting to shim 
2fa3f8236b64336d850b69a9462f7a007cae4ec8c58fde9d85499e7fb39a5ff5" address="unix:///run/containerd/s/4e9c1ca8efcaca9cc8c3de926d7a21eebe6ec50f2d885593ab31ceeed5ef0b90" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:42.375909 systemd[1]: Started cri-containerd-8bf7eb21506707bd6f640954e49087258b5a5fa994554a1e1f29ed81b86df8c1.scope - libcontainer container 8bf7eb21506707bd6f640954e49087258b5a5fa994554a1e1f29ed81b86df8c1. Sep 9 04:54:42.379179 systemd[1]: Started cri-containerd-87f2b4d9d45558cbff1134dd9e9f312d48212b25478f8529ff8c5cc03f8940ea.scope - libcontainer container 87f2b4d9d45558cbff1134dd9e9f312d48212b25478f8529ff8c5cc03f8940ea. Sep 9 04:54:42.385004 systemd[1]: Started cri-containerd-2fa3f8236b64336d850b69a9462f7a007cae4ec8c58fde9d85499e7fb39a5ff5.scope - libcontainer container 2fa3f8236b64336d850b69a9462f7a007cae4ec8c58fde9d85499e7fb39a5ff5. Sep 9 04:54:42.423048 containerd[1525]: time="2025-09-09T04:54:42.422998562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bf7eb21506707bd6f640954e49087258b5a5fa994554a1e1f29ed81b86df8c1\"" Sep 9 04:54:42.423427 containerd[1525]: time="2025-09-09T04:54:42.423376171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9dac967acb3337ae8a154a48b4b93e7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f2b4d9d45558cbff1134dd9e9f312d48212b25478f8529ff8c5cc03f8940ea\"" Sep 9 04:54:42.430239 containerd[1525]: time="2025-09-09T04:54:42.430195329Z" level=info msg="CreateContainer within sandbox \"8bf7eb21506707bd6f640954e49087258b5a5fa994554a1e1f29ed81b86df8c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 04:54:42.432753 containerd[1525]: time="2025-09-09T04:54:42.432711851Z" level=info msg="CreateContainer within sandbox \"87f2b4d9d45558cbff1134dd9e9f312d48212b25478f8529ff8c5cc03f8940ea\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 04:54:42.433137 containerd[1525]: time="2025-09-09T04:54:42.433098183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa3f8236b64336d850b69a9462f7a007cae4ec8c58fde9d85499e7fb39a5ff5\"" Sep 9 04:54:42.436813 containerd[1525]: time="2025-09-09T04:54:42.436778263Z" level=info msg="CreateContainer within sandbox \"2fa3f8236b64336d850b69a9462f7a007cae4ec8c58fde9d85499e7fb39a5ff5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 04:54:42.443602 containerd[1525]: time="2025-09-09T04:54:42.442555837Z" level=info msg="Container 0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:42.444435 containerd[1525]: time="2025-09-09T04:54:42.444395857Z" level=info msg="Container 7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:42.445717 containerd[1525]: time="2025-09-09T04:54:42.445545569Z" level=info msg="Container 2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:42.450098 containerd[1525]: time="2025-09-09T04:54:42.450040012Z" level=info msg="CreateContainer within sandbox \"8bf7eb21506707bd6f640954e49087258b5a5fa994554a1e1f29ed81b86df8c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb\"" Sep 9 04:54:42.450882 containerd[1525]: time="2025-09-09T04:54:42.450851494Z" level=info msg="StartContainer for \"0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb\"" Sep 9 04:54:42.452709 containerd[1525]: time="2025-09-09T04:54:42.452373612Z" level=info msg="connecting to shim 0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb" 
address="unix:///run/containerd/s/83afe87a108e4eaac7014076611791456d35db47248b764b8d9383fb65907776" protocol=ttrpc version=3 Sep 9 04:54:42.454196 containerd[1525]: time="2025-09-09T04:54:42.454151324Z" level=info msg="CreateContainer within sandbox \"87f2b4d9d45558cbff1134dd9e9f312d48212b25478f8529ff8c5cc03f8940ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77\"" Sep 9 04:54:42.454670 containerd[1525]: time="2025-09-09T04:54:42.454641703Z" level=info msg="StartContainer for \"7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77\"" Sep 9 04:54:42.455591 containerd[1525]: time="2025-09-09T04:54:42.455498284Z" level=info msg="CreateContainer within sandbox \"2fa3f8236b64336d850b69a9462f7a007cae4ec8c58fde9d85499e7fb39a5ff5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9\"" Sep 9 04:54:42.456265 containerd[1525]: time="2025-09-09T04:54:42.455732749Z" level=info msg="connecting to shim 7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77" address="unix:///run/containerd/s/39b23e70378a45cca92f5dfc7b5abeb3aff70c30ad82aed02d1ef89e6766e3b8" protocol=ttrpc version=3 Sep 9 04:54:42.456265 containerd[1525]: time="2025-09-09T04:54:42.456103434Z" level=info msg="StartContainer for \"2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9\"" Sep 9 04:54:42.458075 containerd[1525]: time="2025-09-09T04:54:42.458023810Z" level=info msg="connecting to shim 2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9" address="unix:///run/containerd/s/4e9c1ca8efcaca9cc8c3de926d7a21eebe6ec50f2d885593ab31ceeed5ef0b90" protocol=ttrpc version=3 Sep 9 04:54:42.476880 systemd[1]: Started cri-containerd-0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb.scope - libcontainer container 0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb. 
Sep 9 04:54:42.482443 systemd[1]: Started cri-containerd-7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77.scope - libcontainer container 7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77. Sep 9 04:54:42.486303 systemd[1]: Started cri-containerd-2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9.scope - libcontainer container 2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9. Sep 9 04:54:42.521848 containerd[1525]: time="2025-09-09T04:54:42.521809433Z" level=info msg="StartContainer for \"0feb6723d1c6ad2974d1d04c90f8a6398407b6aac7d847ed74f4fc9e6437bafb\" returns successfully" Sep 9 04:54:42.535316 containerd[1525]: time="2025-09-09T04:54:42.535258986Z" level=info msg="StartContainer for \"7b0e5999fb4027d4edfee5e03840afe88eefa8bcd59dbe55ba58bca6ec6d9a77\" returns successfully" Sep 9 04:54:42.538164 containerd[1525]: time="2025-09-09T04:54:42.538123702Z" level=info msg="StartContainer for \"2ebd245f188e45f634e603cfa366b56300eb35732d4411610ba16a75abcd7ef9\" returns successfully" Sep 9 04:54:42.594445 kubelet[2312]: E0909 04:54:42.594393 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 04:54:42.607091 kubelet[2312]: I0909 04:54:42.607058 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 04:54:42.607465 kubelet[2312]: E0909 04:54:42.607417 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 9 04:54:42.864252 kubelet[2312]: E0909 04:54:42.863663 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Sep 9 04:54:42.871003 kubelet[2312]: E0909 04:54:42.870914 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:42.874159 kubelet[2312]: E0909 04:54:42.874130 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:43.408775 kubelet[2312]: I0909 04:54:43.408736 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 04:54:43.877826 kubelet[2312]: E0909 04:54:43.877729 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:43.878108 kubelet[2312]: E0909 04:54:43.877842 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:43.929525 kubelet[2312]: E0909 04:54:43.929493 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 04:54:44.333615 kubelet[2312]: E0909 04:54:44.333280 2312 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 04:54:44.367221 kubelet[2312]: E0909 04:54:44.367132 2312 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863843e11bafb66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 04:54:41.724930918 
+0000 UTC m=+1.060213596,LastTimestamp:2025-09-09 04:54:41.724930918 +0000 UTC m=+1.060213596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 04:54:44.420810 kubelet[2312]: I0909 04:54:44.420774 2312 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 04:54:44.420810 kubelet[2312]: E0909 04:54:44.420811 2312 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 04:54:44.432927 kubelet[2312]: E0909 04:54:44.432898 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 04:54:44.534006 kubelet[2312]: E0909 04:54:44.533965 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 04:54:44.634706 kubelet[2312]: E0909 04:54:44.634572 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 04:54:44.718244 kubelet[2312]: I0909 04:54:44.718208 2312 apiserver.go:52] "Watching apiserver" Sep 9 04:54:44.731697 kubelet[2312]: I0909 04:54:44.731665 2312 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 04:54:44.733810 kubelet[2312]: I0909 04:54:44.733784 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:44.739277 kubelet[2312]: E0909 04:54:44.739243 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:44.739277 kubelet[2312]: I0909 04:54:44.739275 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:44.741086 kubelet[2312]: 
E0909 04:54:44.741059 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:44.741137 kubelet[2312]: I0909 04:54:44.741089 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:44.742454 kubelet[2312]: E0909 04:54:44.742429 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:44.877817 kubelet[2312]: I0909 04:54:44.877790 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:44.879629 kubelet[2312]: E0909 04:54:44.879600 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:47.167968 systemd[1]: Reload requested from client PID 2599 ('systemctl') (unit session-7.scope)... Sep 9 04:54:47.167984 systemd[1]: Reloading... Sep 9 04:54:47.245782 zram_generator::config[2648]: No configuration found. Sep 9 04:54:47.406155 systemd[1]: Reloading finished in 237 ms. Sep 9 04:54:47.436277 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:47.452121 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 04:54:47.452341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 04:54:47.452397 systemd[1]: kubelet.service: Consumed 1.457s CPU time, 130.3M memory peak. Sep 9 04:54:47.454600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:54:47.635139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 04:54:47.639837 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 04:54:47.680592 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:54:47.680592 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 04:54:47.680592 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:54:47.680956 kubelet[2684]: I0909 04:54:47.680669 2684 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 04:54:47.687012 kubelet[2684]: I0909 04:54:47.686110 2684 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 04:54:47.687012 kubelet[2684]: I0909 04:54:47.686133 2684 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 04:54:47.687012 kubelet[2684]: I0909 04:54:47.686518 2684 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 04:54:47.688948 kubelet[2684]: I0909 04:54:47.688929 2684 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 04:54:47.691192 kubelet[2684]: I0909 04:54:47.691160 2684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 04:54:47.694860 kubelet[2684]: I0909 04:54:47.694802 2684 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Sep 9 04:54:47.697611 kubelet[2684]: I0909 04:54:47.697589 2684 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 04:54:47.697804 kubelet[2684]: I0909 04:54:47.697785 2684 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 04:54:47.697950 kubelet[2684]: I0909 04:54:47.697805 2684 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion
":2} Sep 9 04:54:47.698033 kubelet[2684]: I0909 04:54:47.697962 2684 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 04:54:47.698033 kubelet[2684]: I0909 04:54:47.697970 2684 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 04:54:47.698033 kubelet[2684]: I0909 04:54:47.698010 2684 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:47.698154 kubelet[2684]: I0909 04:54:47.698142 2684 kubelet.go:480] "Attempting to sync node with API server" Sep 9 04:54:47.698720 kubelet[2684]: I0909 04:54:47.698157 2684 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 04:54:47.698720 kubelet[2684]: I0909 04:54:47.698182 2684 kubelet.go:386] "Adding apiserver pod source" Sep 9 04:54:47.698720 kubelet[2684]: I0909 04:54:47.698638 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 04:54:47.699487 kubelet[2684]: I0909 04:54:47.699454 2684 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 04:54:47.700491 kubelet[2684]: I0909 04:54:47.700462 2684 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 04:54:47.702729 kubelet[2684]: I0909 04:54:47.702683 2684 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 04:54:47.702841 kubelet[2684]: I0909 04:54:47.702831 2684 server.go:1289] "Started kubelet" Sep 9 04:54:47.704573 kubelet[2684]: I0909 04:54:47.704551 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 04:54:47.704954 kubelet[2684]: I0909 04:54:47.704904 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 04:54:47.705220 kubelet[2684]: I0909 04:54:47.705202 2684 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 
04:54:47.705310 kubelet[2684]: I0909 04:54:47.705199 2684 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 04:54:47.705430 kubelet[2684]: I0909 04:54:47.705400 2684 factory.go:223] Registration of the systemd container factory successfully Sep 9 04:54:47.705552 kubelet[2684]: I0909 04:54:47.705514 2684 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 04:54:47.708478 kubelet[2684]: I0909 04:54:47.708455 2684 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 04:54:47.709747 kubelet[2684]: I0909 04:54:47.708566 2684 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 04:54:47.709747 kubelet[2684]: E0909 04:54:47.708600 2684 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 04:54:47.709747 kubelet[2684]: I0909 04:54:47.708784 2684 server.go:317] "Adding debug handlers to kubelet server" Sep 9 04:54:47.710051 kubelet[2684]: I0909 04:54:47.710028 2684 reconciler.go:26] "Reconciler: start to sync state" Sep 9 04:54:47.712395 kubelet[2684]: I0909 04:54:47.711526 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 04:54:47.715841 kubelet[2684]: I0909 04:54:47.715536 2684 factory.go:223] Registration of the containerd container factory successfully Sep 9 04:54:47.737432 kubelet[2684]: I0909 04:54:47.737387 2684 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 04:54:47.738616 kubelet[2684]: I0909 04:54:47.738596 2684 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 04:54:47.739847 kubelet[2684]: I0909 04:54:47.738760 2684 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 04:54:47.739847 kubelet[2684]: I0909 04:54:47.738795 2684 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 04:54:47.739847 kubelet[2684]: I0909 04:54:47.738802 2684 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 04:54:47.739847 kubelet[2684]: E0909 04:54:47.738845 2684 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 04:54:47.757636 kubelet[2684]: I0909 04:54:47.757615 2684 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 04:54:47.757754 kubelet[2684]: I0909 04:54:47.757720 2684 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 04:54:47.757754 kubelet[2684]: I0909 04:54:47.757748 2684 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:54:47.757886 kubelet[2684]: I0909 04:54:47.757871 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 04:54:47.757908 kubelet[2684]: I0909 04:54:47.757887 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 04:54:47.757929 kubelet[2684]: I0909 04:54:47.757911 2684 policy_none.go:49] "None policy: Start" Sep 9 04:54:47.757929 kubelet[2684]: I0909 04:54:47.757927 2684 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 04:54:47.757966 kubelet[2684]: I0909 04:54:47.757937 2684 state_mem.go:35] "Initializing new in-memory state store" Sep 9 04:54:47.758044 kubelet[2684]: I0909 04:54:47.758034 2684 state_mem.go:75] "Updated machine memory state" Sep 9 04:54:47.761373 kubelet[2684]: E0909 04:54:47.761350 2684 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 04:54:47.761529 kubelet[2684]: I0909 04:54:47.761512 
2684 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 04:54:47.761559 kubelet[2684]: I0909 04:54:47.761529 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 04:54:47.762196 kubelet[2684]: I0909 04:54:47.761960 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 04:54:47.762941 kubelet[2684]: E0909 04:54:47.762918 2684 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 04:54:47.840594 kubelet[2684]: I0909 04:54:47.840547 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:47.840880 kubelet[2684]: I0909 04:54:47.840559 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:47.840880 kubelet[2684]: I0909 04:54:47.840675 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:47.863618 kubelet[2684]: I0909 04:54:47.863587 2684 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 04:54:47.870105 kubelet[2684]: I0909 04:54:47.870079 2684 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 04:54:47.870190 kubelet[2684]: I0909 04:54:47.870166 2684 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 04:54:47.911744 kubelet[2684]: I0909 04:54:47.911707 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:47.911873 kubelet[2684]: I0909 04:54:47.911753 2684 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:47.911873 kubelet[2684]: I0909 04:54:47.911775 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dac967acb3337ae8a154a48b4b93e7f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9dac967acb3337ae8a154a48b4b93e7f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:47.911873 kubelet[2684]: I0909 04:54:47.911795 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dac967acb3337ae8a154a48b4b93e7f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9dac967acb3337ae8a154a48b4b93e7f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:47.911944 kubelet[2684]: I0909 04:54:47.911863 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:47.911944 kubelet[2684]: I0909 04:54:47.911920 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:47.911986 kubelet[2684]: I0909 04:54:47.911957 2684 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 04:54:47.911986 kubelet[2684]: I0909 04:54:47.911975 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dac967acb3337ae8a154a48b4b93e7f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9dac967acb3337ae8a154a48b4b93e7f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:47.912026 kubelet[2684]: I0909 04:54:47.912004 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:48.699264 kubelet[2684]: I0909 04:54:48.699217 2684 apiserver.go:52] "Watching apiserver" Sep 9 04:54:48.709459 kubelet[2684]: I0909 04:54:48.709411 2684 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 04:54:48.751164 kubelet[2684]: I0909 04:54:48.751126 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:48.751164 kubelet[2684]: I0909 04:54:48.751181 2684 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:48.760445 kubelet[2684]: E0909 04:54:48.760401 2684 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 04:54:48.763264 kubelet[2684]: E0909 04:54:48.761814 2684 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 04:54:48.783135 kubelet[2684]: I0909 04:54:48.783067 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7830443759999999 podStartE2EDuration="1.783044376s" podCreationTimestamp="2025-09-09 04:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:48.782951917 +0000 UTC m=+1.139179433" watchObservedRunningTime="2025-09-09 04:54:48.783044376 +0000 UTC m=+1.139271892" Sep 9 04:54:48.783581 kubelet[2684]: I0909 04:54:48.783519 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.783506193 podStartE2EDuration="1.783506193s" podCreationTimestamp="2025-09-09 04:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:48.773569036 +0000 UTC m=+1.129796552" watchObservedRunningTime="2025-09-09 04:54:48.783506193 +0000 UTC m=+1.139733709" Sep 9 04:54:48.792098 kubelet[2684]: I0909 04:54:48.792034 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.792021812 podStartE2EDuration="1.792021812s" podCreationTimestamp="2025-09-09 04:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:48.7915826 +0000 UTC m=+1.147810076" watchObservedRunningTime="2025-09-09 04:54:48.792021812 +0000 UTC m=+1.148249328" Sep 9 04:54:51.564371 kubelet[2684]: I0909 04:54:51.564318 2684 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 
04:54:51.564680 containerd[1525]: time="2025-09-09T04:54:51.564608462Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 04:54:51.564879 kubelet[2684]: I0909 04:54:51.564774 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 04:54:52.642538 systemd[1]: Created slice kubepods-besteffort-podec3b96b4_6b70_4ba4_888e_9680b08faf8a.slice - libcontainer container kubepods-besteffort-podec3b96b4_6b70_4ba4_888e_9680b08faf8a.slice. Sep 9 04:54:52.648174 kubelet[2684]: I0909 04:54:52.648044 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec3b96b4-6b70-4ba4-888e-9680b08faf8a-lib-modules\") pod \"kube-proxy-jvfs5\" (UID: \"ec3b96b4-6b70-4ba4-888e-9680b08faf8a\") " pod="kube-system/kube-proxy-jvfs5" Sep 9 04:54:52.648174 kubelet[2684]: I0909 04:54:52.648089 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qk47\" (UniqueName: \"kubernetes.io/projected/ec3b96b4-6b70-4ba4-888e-9680b08faf8a-kube-api-access-5qk47\") pod \"kube-proxy-jvfs5\" (UID: \"ec3b96b4-6b70-4ba4-888e-9680b08faf8a\") " pod="kube-system/kube-proxy-jvfs5" Sep 9 04:54:52.648174 kubelet[2684]: I0909 04:54:52.648114 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec3b96b4-6b70-4ba4-888e-9680b08faf8a-kube-proxy\") pod \"kube-proxy-jvfs5\" (UID: \"ec3b96b4-6b70-4ba4-888e-9680b08faf8a\") " pod="kube-system/kube-proxy-jvfs5" Sep 9 04:54:52.648174 kubelet[2684]: I0909 04:54:52.648131 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec3b96b4-6b70-4ba4-888e-9680b08faf8a-xtables-lock\") pod \"kube-proxy-jvfs5\" (UID: 
\"ec3b96b4-6b70-4ba4-888e-9680b08faf8a\") " pod="kube-system/kube-proxy-jvfs5" Sep 9 04:54:52.751843 systemd[1]: Created slice kubepods-besteffort-pod49f0c9a2_f448_4846_b59a_a0fe3ef99b4c.slice - libcontainer container kubepods-besteffort-pod49f0c9a2_f448_4846_b59a_a0fe3ef99b4c.slice. Sep 9 04:54:52.849386 kubelet[2684]: I0909 04:54:52.849334 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmvv\" (UniqueName: \"kubernetes.io/projected/49f0c9a2-f448-4846-b59a-a0fe3ef99b4c-kube-api-access-6nmvv\") pod \"tigera-operator-755d956888-j9st2\" (UID: \"49f0c9a2-f448-4846-b59a-a0fe3ef99b4c\") " pod="tigera-operator/tigera-operator-755d956888-j9st2" Sep 9 04:54:52.849386 kubelet[2684]: I0909 04:54:52.849383 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49f0c9a2-f448-4846-b59a-a0fe3ef99b4c-var-lib-calico\") pod \"tigera-operator-755d956888-j9st2\" (UID: \"49f0c9a2-f448-4846-b59a-a0fe3ef99b4c\") " pod="tigera-operator/tigera-operator-755d956888-j9st2" Sep 9 04:54:52.952257 containerd[1525]: time="2025-09-09T04:54:52.951932046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvfs5,Uid:ec3b96b4-6b70-4ba4-888e-9680b08faf8a,Namespace:kube-system,Attempt:0,}" Sep 9 04:54:52.968250 containerd[1525]: time="2025-09-09T04:54:52.968209760Z" level=info msg="connecting to shim 3f5c7c864fd647c380597802ea36de467b0160030192a19f318dc148fa5e093e" address="unix:///run/containerd/s/5b722679a93a01e4483552d7e6788854ca2fb96bdce2afb237a10c0d4b66b017" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:52.991876 systemd[1]: Started cri-containerd-3f5c7c864fd647c380597802ea36de467b0160030192a19f318dc148fa5e093e.scope - libcontainer container 3f5c7c864fd647c380597802ea36de467b0160030192a19f318dc148fa5e093e. 
Sep 9 04:54:53.016108 containerd[1525]: time="2025-09-09T04:54:53.016058959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvfs5,Uid:ec3b96b4-6b70-4ba4-888e-9680b08faf8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5c7c864fd647c380597802ea36de467b0160030192a19f318dc148fa5e093e\"" Sep 9 04:54:53.020935 containerd[1525]: time="2025-09-09T04:54:53.020886441Z" level=info msg="CreateContainer within sandbox \"3f5c7c864fd647c380597802ea36de467b0160030192a19f318dc148fa5e093e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 04:54:53.030740 containerd[1525]: time="2025-09-09T04:54:53.030191069Z" level=info msg="Container 0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:53.037249 containerd[1525]: time="2025-09-09T04:54:53.037198854Z" level=info msg="CreateContainer within sandbox \"3f5c7c864fd647c380597802ea36de467b0160030192a19f318dc148fa5e093e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff\"" Sep 9 04:54:53.037823 containerd[1525]: time="2025-09-09T04:54:53.037793908Z" level=info msg="StartContainer for \"0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff\"" Sep 9 04:54:53.039582 containerd[1525]: time="2025-09-09T04:54:53.039545664Z" level=info msg="connecting to shim 0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff" address="unix:///run/containerd/s/5b722679a93a01e4483552d7e6788854ca2fb96bdce2afb237a10c0d4b66b017" protocol=ttrpc version=3 Sep 9 04:54:53.057294 containerd[1525]: time="2025-09-09T04:54:53.057247177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j9st2,Uid:49f0c9a2-f448-4846-b59a-a0fe3ef99b4c,Namespace:tigera-operator,Attempt:0,}" Sep 9 04:54:53.062892 systemd[1]: Started cri-containerd-0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff.scope - 
libcontainer container 0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff. Sep 9 04:54:53.072384 containerd[1525]: time="2025-09-09T04:54:53.072328436Z" level=info msg="connecting to shim 0b920697eda7f14ed59a214df614090b991a2355bfe703c79fe80c72346edb5a" address="unix:///run/containerd/s/6461dddb1fcc98da8df5d682b18881666de6b013fcf148b41dfba78791a6e957" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:54:53.097871 systemd[1]: Started cri-containerd-0b920697eda7f14ed59a214df614090b991a2355bfe703c79fe80c72346edb5a.scope - libcontainer container 0b920697eda7f14ed59a214df614090b991a2355bfe703c79fe80c72346edb5a. Sep 9 04:54:53.105806 containerd[1525]: time="2025-09-09T04:54:53.105673216Z" level=info msg="StartContainer for \"0a00677ddf6ec097c460cf18ad40ccd5b9b39225289633a51e290f17d045a6ff\" returns successfully" Sep 9 04:54:53.135530 containerd[1525]: time="2025-09-09T04:54:53.135469876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j9st2,Uid:49f0c9a2-f448-4846-b59a-a0fe3ef99b4c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b920697eda7f14ed59a214df614090b991a2355bfe703c79fe80c72346edb5a\"" Sep 9 04:54:53.137795 containerd[1525]: time="2025-09-09T04:54:53.137767798Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 04:54:53.775075 kubelet[2684]: I0909 04:54:53.775000 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jvfs5" podStartSLOduration=1.774982756 podStartE2EDuration="1.774982756s" podCreationTimestamp="2025-09-09 04:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:54:53.77475568 +0000 UTC m=+6.130983236" watchObservedRunningTime="2025-09-09 04:54:53.774982756 +0000 UTC m=+6.131210272" Sep 9 04:54:54.150994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248496730.mount: Deactivated successfully. 
Sep 9 04:54:54.570469 containerd[1525]: time="2025-09-09T04:54:54.570372196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:54.571843 containerd[1525]: time="2025-09-09T04:54:54.571583897Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 04:54:54.572532 containerd[1525]: time="2025-09-09T04:54:54.572500474Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:54.574448 containerd[1525]: time="2025-09-09T04:54:54.574414800Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:54:54.575372 containerd[1525]: time="2025-09-09T04:54:54.575325696Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.437438679s" Sep 9 04:54:54.575372 containerd[1525]: time="2025-09-09T04:54:54.575362141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 04:54:54.582592 containerd[1525]: time="2025-09-09T04:54:54.582561736Z" level=info msg="CreateContainer within sandbox \"0b920697eda7f14ed59a214df614090b991a2355bfe703c79fe80c72346edb5a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 04:54:54.588501 containerd[1525]: time="2025-09-09T04:54:54.587968104Z" level=info msg="Container 
6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:54:54.592302 containerd[1525]: time="2025-09-09T04:54:54.592264585Z" level=info msg="CreateContainer within sandbox \"0b920697eda7f14ed59a214df614090b991a2355bfe703c79fe80c72346edb5a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c\"" Sep 9 04:54:54.592714 containerd[1525]: time="2025-09-09T04:54:54.592693249Z" level=info msg="StartContainer for \"6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c\"" Sep 9 04:54:54.593464 containerd[1525]: time="2025-09-09T04:54:54.593378552Z" level=info msg="connecting to shim 6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c" address="unix:///run/containerd/s/6461dddb1fcc98da8df5d682b18881666de6b013fcf148b41dfba78791a6e957" protocol=ttrpc version=3 Sep 9 04:54:54.617839 systemd[1]: Started cri-containerd-6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c.scope - libcontainer container 6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c. 
Sep 9 04:54:54.651889 containerd[1525]: time="2025-09-09T04:54:54.651849044Z" level=info msg="StartContainer for \"6f55b348c2de44e2d68fb2ce4a67e271de3cfd50c2426dfc22c8e5be2237807c\" returns successfully" Sep 9 04:54:54.776759 kubelet[2684]: I0909 04:54:54.776666 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-j9st2" podStartSLOduration=1.335047736 podStartE2EDuration="2.776651802s" podCreationTimestamp="2025-09-09 04:54:52 +0000 UTC" firstStartedPulling="2025-09-09 04:54:53.137198589 +0000 UTC m=+5.493426105" lastFinishedPulling="2025-09-09 04:54:54.578802655 +0000 UTC m=+6.935030171" observedRunningTime="2025-09-09 04:54:54.776363118 +0000 UTC m=+7.132590634" watchObservedRunningTime="2025-09-09 04:54:54.776651802 +0000 UTC m=+7.132879318" Sep 9 04:54:56.775659 kubelet[2684]: E0909 04:54:56.775611 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:54:57.772458 kubelet[2684]: E0909 04:54:57.772320 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:54:59.350012 kubelet[2684]: E0909 04:54:59.349921 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:54:59.695245 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 9 04:54:59.698633 sshd[1741]: Connection closed by 10.0.0.1 port 57242 Sep 9 04:54:59.699076 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Sep 9 04:54:59.703385 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:57242.service: Deactivated successfully. Sep 9 04:54:59.706150 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 9 04:54:59.706328 systemd[1]: session-7.scope: Consumed 6.968s CPU time, 210.8M memory peak. Sep 9 04:54:59.710888 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Sep 9 04:54:59.713891 systemd-logind[1503]: Removed session 7. Sep 9 04:55:00.544713 update_engine[1510]: I20250909 04:55:00.543752 1510 update_attempter.cc:509] Updating boot flags... Sep 9 04:55:01.249642 kubelet[2684]: E0909 04:55:01.249597 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:05.554902 systemd[1]: Created slice kubepods-besteffort-pod00834311_f1ad_48ec_b819_de117ba57ff9.slice - libcontainer container kubepods-besteffort-pod00834311_f1ad_48ec_b819_de117ba57ff9.slice. Sep 9 04:55:05.641436 kubelet[2684]: I0909 04:55:05.641389 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00834311-f1ad-48ec-b819-de117ba57ff9-typha-certs\") pod \"calico-typha-6d7bfcb557-5xmt2\" (UID: \"00834311-f1ad-48ec-b819-de117ba57ff9\") " pod="calico-system/calico-typha-6d7bfcb557-5xmt2" Sep 9 04:55:05.641436 kubelet[2684]: I0909 04:55:05.641436 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00834311-f1ad-48ec-b819-de117ba57ff9-tigera-ca-bundle\") pod \"calico-typha-6d7bfcb557-5xmt2\" (UID: \"00834311-f1ad-48ec-b819-de117ba57ff9\") " pod="calico-system/calico-typha-6d7bfcb557-5xmt2" Sep 9 04:55:05.641809 kubelet[2684]: I0909 04:55:05.641459 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f88g\" (UniqueName: \"kubernetes.io/projected/00834311-f1ad-48ec-b819-de117ba57ff9-kube-api-access-2f88g\") pod \"calico-typha-6d7bfcb557-5xmt2\" (UID: 
\"00834311-f1ad-48ec-b819-de117ba57ff9\") " pod="calico-system/calico-typha-6d7bfcb557-5xmt2" Sep 9 04:55:05.847240 systemd[1]: Created slice kubepods-besteffort-poddb272de9_6a9d_4eab_98b9_4d90b20c0729.slice - libcontainer container kubepods-besteffort-poddb272de9_6a9d_4eab_98b9_4d90b20c0729.slice. Sep 9 04:55:05.859484 kubelet[2684]: E0909 04:55:05.859438 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:05.862587 containerd[1525]: time="2025-09-09T04:55:05.861471262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d7bfcb557-5xmt2,Uid:00834311-f1ad-48ec-b819-de117ba57ff9,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:05.893319 containerd[1525]: time="2025-09-09T04:55:05.893245890Z" level=info msg="connecting to shim 3cb17d166c376bd73df22300f1596d8924cbd3894cf24b769e8cddd828ab909f" address="unix:///run/containerd/s/b63955b4eb71ed5a8ca423cbf9810b488fcfb210fb627cf3e9b6b447671b6fae" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:05.951334 kubelet[2684]: I0909 04:55:05.950790 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-flexvol-driver-host\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951334 kubelet[2684]: I0909 04:55:05.950855 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-policysync\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951334 kubelet[2684]: I0909 04:55:05.950879 2684 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-var-lib-calico\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951334 kubelet[2684]: I0909 04:55:05.950901 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-var-run-calico\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951334 kubelet[2684]: I0909 04:55:05.950921 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-cni-bin-dir\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951519 kubelet[2684]: I0909 04:55:05.950935 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-cni-net-dir\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951519 kubelet[2684]: I0909 04:55:05.950961 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rh4\" (UniqueName: \"kubernetes.io/projected/db272de9-6a9d-4eab-98b9-4d90b20c0729-kube-api-access-s8rh4\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951519 kubelet[2684]: I0909 04:55:05.951008 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-xtables-lock\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951519 kubelet[2684]: I0909 04:55:05.951049 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db272de9-6a9d-4eab-98b9-4d90b20c0729-tigera-ca-bundle\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951519 kubelet[2684]: I0909 04:55:05.951086 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-lib-modules\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951610 kubelet[2684]: I0909 04:55:05.951107 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db272de9-6a9d-4eab-98b9-4d90b20c0729-node-certs\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.951610 kubelet[2684]: I0909 04:55:05.951124 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db272de9-6a9d-4eab-98b9-4d90b20c0729-cni-log-dir\") pod \"calico-node-f7n5v\" (UID: \"db272de9-6a9d-4eab-98b9-4d90b20c0729\") " pod="calico-system/calico-node-f7n5v" Sep 9 04:55:05.962857 systemd[1]: Started cri-containerd-3cb17d166c376bd73df22300f1596d8924cbd3894cf24b769e8cddd828ab909f.scope - libcontainer container 3cb17d166c376bd73df22300f1596d8924cbd3894cf24b769e8cddd828ab909f. 
Sep 9 04:55:06.022640 containerd[1525]: time="2025-09-09T04:55:06.022599671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d7bfcb557-5xmt2,Uid:00834311-f1ad-48ec-b819-de117ba57ff9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cb17d166c376bd73df22300f1596d8924cbd3894cf24b769e8cddd828ab909f\"" Sep 9 04:55:06.023472 kubelet[2684]: E0909 04:55:06.023452 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:06.025240 containerd[1525]: time="2025-09-09T04:55:06.024822132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 04:55:06.060491 kubelet[2684]: E0909 04:55:06.060455 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.060491 kubelet[2684]: W0909 04:55:06.060480 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.060605 kubelet[2684]: E0909 04:55:06.060507 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.070651 kubelet[2684]: E0909 04:55:06.070580 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.070651 kubelet[2684]: W0909 04:55:06.070598 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.070651 kubelet[2684]: E0909 04:55:06.070614 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.078740 kubelet[2684]: E0909 04:55:06.078682 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxqsx" podUID="a901a0c3-eff5-4d2f-bac6-0e583e580868" Sep 9 04:55:06.124957 kubelet[2684]: E0909 04:55:06.124859 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.124957 kubelet[2684]: W0909 04:55:06.124881 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.124957 kubelet[2684]: E0909 04:55:06.124900 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.125882 kubelet[2684]: E0909 04:55:06.125862 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.131362 kubelet[2684]: W0909 04:55:06.125880 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.131660 kubelet[2684]: E0909 04:55:06.131626 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.132056 kubelet[2684]: E0909 04:55:06.132032 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.132056 kubelet[2684]: W0909 04:55:06.132051 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.132126 kubelet[2684]: E0909 04:55:06.132063 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.132254 kubelet[2684]: E0909 04:55:06.132241 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.132254 kubelet[2684]: W0909 04:55:06.132253 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.132309 kubelet[2684]: E0909 04:55:06.132263 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.132493 kubelet[2684]: E0909 04:55:06.132478 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.132493 kubelet[2684]: W0909 04:55:06.132491 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.132560 kubelet[2684]: E0909 04:55:06.132500 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.132822 kubelet[2684]: E0909 04:55:06.132804 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.132855 kubelet[2684]: W0909 04:55:06.132822 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.132855 kubelet[2684]: E0909 04:55:06.132835 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.134289 kubelet[2684]: E0909 04:55:06.133981 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.134289 kubelet[2684]: W0909 04:55:06.134001 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.134289 kubelet[2684]: E0909 04:55:06.134015 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.134771 kubelet[2684]: E0909 04:55:06.134748 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.134771 kubelet[2684]: W0909 04:55:06.134765 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.134869 kubelet[2684]: E0909 04:55:06.134777 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.135177 kubelet[2684]: E0909 04:55:06.135145 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.135177 kubelet[2684]: W0909 04:55:06.135165 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.135177 kubelet[2684]: E0909 04:55:06.135176 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.135766 kubelet[2684]: E0909 04:55:06.135732 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.135766 kubelet[2684]: W0909 04:55:06.135750 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.135766 kubelet[2684]: E0909 04:55:06.135762 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.137362 kubelet[2684]: E0909 04:55:06.136901 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.137362 kubelet[2684]: W0909 04:55:06.136928 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.137362 kubelet[2684]: E0909 04:55:06.136940 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.137362 kubelet[2684]: E0909 04:55:06.137114 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.137362 kubelet[2684]: W0909 04:55:06.137123 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.137362 kubelet[2684]: E0909 04:55:06.137132 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.137362 kubelet[2684]: E0909 04:55:06.137296 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.137362 kubelet[2684]: W0909 04:55:06.137307 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.137362 kubelet[2684]: E0909 04:55:06.137315 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.137569 kubelet[2684]: E0909 04:55:06.137473 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.137569 kubelet[2684]: W0909 04:55:06.137483 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.137569 kubelet[2684]: E0909 04:55:06.137492 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.137666 kubelet[2684]: E0909 04:55:06.137643 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.137666 kubelet[2684]: W0909 04:55:06.137658 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.137741 kubelet[2684]: E0909 04:55:06.137668 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.137849 kubelet[2684]: E0909 04:55:06.137829 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.137849 kubelet[2684]: W0909 04:55:06.137844 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.137934 kubelet[2684]: E0909 04:55:06.137853 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.138146 kubelet[2684]: E0909 04:55:06.138128 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.138172 kubelet[2684]: W0909 04:55:06.138146 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.138172 kubelet[2684]: E0909 04:55:06.138156 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.138636 kubelet[2684]: E0909 04:55:06.138607 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.138636 kubelet[2684]: W0909 04:55:06.138627 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.138636 kubelet[2684]: E0909 04:55:06.138639 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.138888 kubelet[2684]: E0909 04:55:06.138870 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.138888 kubelet[2684]: W0909 04:55:06.138883 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.138951 kubelet[2684]: E0909 04:55:06.138892 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.139093 kubelet[2684]: E0909 04:55:06.139079 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.139093 kubelet[2684]: W0909 04:55:06.139091 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.139151 kubelet[2684]: E0909 04:55:06.139101 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.152296 containerd[1525]: time="2025-09-09T04:55:06.151844146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f7n5v,Uid:db272de9-6a9d-4eab-98b9-4d90b20c0729,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:06.155477 kubelet[2684]: E0909 04:55:06.154611 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.155477 kubelet[2684]: W0909 04:55:06.154635 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.155477 kubelet[2684]: E0909 04:55:06.154655 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.155477 kubelet[2684]: I0909 04:55:06.154843 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a901a0c3-eff5-4d2f-bac6-0e583e580868-kubelet-dir\") pod \"csi-node-driver-kxqsx\" (UID: \"a901a0c3-eff5-4d2f-bac6-0e583e580868\") " pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:06.155477 kubelet[2684]: E0909 04:55:06.154904 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.155477 kubelet[2684]: W0909 04:55:06.154927 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.155477 kubelet[2684]: E0909 04:55:06.154938 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.155477 kubelet[2684]: E0909 04:55:06.155453 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.155477 kubelet[2684]: W0909 04:55:06.155465 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156018 kubelet[2684]: E0909 04:55:06.155476 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.156018 kubelet[2684]: E0909 04:55:06.155658 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156018 kubelet[2684]: W0909 04:55:06.155666 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156018 kubelet[2684]: E0909 04:55:06.155675 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.156018 kubelet[2684]: I0909 04:55:06.155723 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a901a0c3-eff5-4d2f-bac6-0e583e580868-varrun\") pod \"csi-node-driver-kxqsx\" (UID: \"a901a0c3-eff5-4d2f-bac6-0e583e580868\") " pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:06.156018 kubelet[2684]: E0909 04:55:06.155881 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156018 kubelet[2684]: W0909 04:55:06.155890 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156018 kubelet[2684]: E0909 04:55:06.155899 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.156418 kubelet[2684]: I0909 04:55:06.155933 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8djv\" (UniqueName: \"kubernetes.io/projected/a901a0c3-eff5-4d2f-bac6-0e583e580868-kube-api-access-k8djv\") pod \"csi-node-driver-kxqsx\" (UID: \"a901a0c3-eff5-4d2f-bac6-0e583e580868\") " pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:06.156418 kubelet[2684]: E0909 04:55:06.156087 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156418 kubelet[2684]: W0909 04:55:06.156095 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156418 kubelet[2684]: E0909 04:55:06.156104 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.156418 kubelet[2684]: E0909 04:55:06.156260 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156418 kubelet[2684]: W0909 04:55:06.156267 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156418 kubelet[2684]: E0909 04:55:06.156284 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.156418 kubelet[2684]: E0909 04:55:06.156417 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156899 kubelet[2684]: W0909 04:55:06.156424 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156899 kubelet[2684]: E0909 04:55:06.156431 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.156899 kubelet[2684]: E0909 04:55:06.156565 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156899 kubelet[2684]: W0909 04:55:06.156572 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156899 kubelet[2684]: E0909 04:55:06.156579 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.156899 kubelet[2684]: E0909 04:55:06.156716 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.156899 kubelet[2684]: W0909 04:55:06.156723 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.156899 kubelet[2684]: E0909 04:55:06.156731 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.156899 kubelet[2684]: I0909 04:55:06.156754 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a901a0c3-eff5-4d2f-bac6-0e583e580868-registration-dir\") pod \"csi-node-driver-kxqsx\" (UID: \"a901a0c3-eff5-4d2f-bac6-0e583e580868\") " pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:06.158031 kubelet[2684]: E0909 04:55:06.157086 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.158031 kubelet[2684]: W0909 04:55:06.157099 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.158031 kubelet[2684]: E0909 04:55:06.157109 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.158031 kubelet[2684]: I0909 04:55:06.157134 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a901a0c3-eff5-4d2f-bac6-0e583e580868-socket-dir\") pod \"csi-node-driver-kxqsx\" (UID: \"a901a0c3-eff5-4d2f-bac6-0e583e580868\") " pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:06.158031 kubelet[2684]: E0909 04:55:06.157319 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.158031 kubelet[2684]: W0909 04:55:06.157328 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.158031 kubelet[2684]: E0909 04:55:06.157337 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.158031 kubelet[2684]: E0909 04:55:06.157509 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.158031 kubelet[2684]: W0909 04:55:06.157517 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.158342 kubelet[2684]: E0909 04:55:06.157524 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.158342 kubelet[2684]: E0909 04:55:06.157670 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.158342 kubelet[2684]: W0909 04:55:06.157678 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.158342 kubelet[2684]: E0909 04:55:06.157705 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.158342 kubelet[2684]: E0909 04:55:06.157849 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.158342 kubelet[2684]: W0909 04:55:06.157857 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.158342 kubelet[2684]: E0909 04:55:06.157865 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.193787 containerd[1525]: time="2025-09-09T04:55:06.193743835Z" level=info msg="connecting to shim bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca" address="unix:///run/containerd/s/c7f1cbb4502a9b9849988f6d3f7dca6213e14995a42e7ecb54db7df782c00560" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:06.225853 systemd[1]: Started cri-containerd-bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca.scope - libcontainer container bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca. 
Sep 9 04:55:06.254247 containerd[1525]: time="2025-09-09T04:55:06.254152030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f7n5v,Uid:db272de9-6a9d-4eab-98b9-4d90b20c0729,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\"" Sep 9 04:55:06.257996 kubelet[2684]: E0909 04:55:06.257960 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.257996 kubelet[2684]: W0909 04:55:06.257984 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.258192 kubelet[2684]: E0909 04:55:06.258004 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.258454 kubelet[2684]: E0909 04:55:06.258437 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.258454 kubelet[2684]: W0909 04:55:06.258452 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.258503 kubelet[2684]: E0909 04:55:06.258464 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.258713 kubelet[2684]: E0909 04:55:06.258684 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.258713 kubelet[2684]: W0909 04:55:06.258711 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.258781 kubelet[2684]: E0909 04:55:06.258720 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.258894 kubelet[2684]: E0909 04:55:06.258878 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.258894 kubelet[2684]: W0909 04:55:06.258890 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.258987 kubelet[2684]: E0909 04:55:06.258900 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.259082 kubelet[2684]: E0909 04:55:06.259068 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.259082 kubelet[2684]: W0909 04:55:06.259081 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.259133 kubelet[2684]: E0909 04:55:06.259089 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.259337 kubelet[2684]: E0909 04:55:06.259316 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.259366 kubelet[2684]: W0909 04:55:06.259337 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.259366 kubelet[2684]: E0909 04:55:06.259346 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.259528 kubelet[2684]: E0909 04:55:06.259514 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.259528 kubelet[2684]: W0909 04:55:06.259527 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.259574 kubelet[2684]: E0909 04:55:06.259536 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.259730 kubelet[2684]: E0909 04:55:06.259715 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.259751 kubelet[2684]: W0909 04:55:06.259729 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.259751 kubelet[2684]: E0909 04:55:06.259739 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.259903 kubelet[2684]: E0909 04:55:06.259890 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.259939 kubelet[2684]: W0909 04:55:06.259904 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.259939 kubelet[2684]: E0909 04:55:06.259912 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.260127 kubelet[2684]: E0909 04:55:06.260114 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.260127 kubelet[2684]: W0909 04:55:06.260126 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.260177 kubelet[2684]: E0909 04:55:06.260135 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.260315 kubelet[2684]: E0909 04:55:06.260302 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.260343 kubelet[2684]: W0909 04:55:06.260314 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.260343 kubelet[2684]: E0909 04:55:06.260322 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.260500 kubelet[2684]: E0909 04:55:06.260486 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.260520 kubelet[2684]: W0909 04:55:06.260500 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.260520 kubelet[2684]: E0909 04:55:06.260517 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.260685 kubelet[2684]: E0909 04:55:06.260674 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.260721 kubelet[2684]: W0909 04:55:06.260704 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.260721 kubelet[2684]: E0909 04:55:06.260713 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.260898 kubelet[2684]: E0909 04:55:06.260886 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.260898 kubelet[2684]: W0909 04:55:06.260897 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.260945 kubelet[2684]: E0909 04:55:06.260905 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.261222 kubelet[2684]: E0909 04:55:06.261204 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.261253 kubelet[2684]: W0909 04:55:06.261222 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.261253 kubelet[2684]: E0909 04:55:06.261233 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.261615 kubelet[2684]: E0909 04:55:06.261596 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.261737 kubelet[2684]: W0909 04:55:06.261721 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.261769 kubelet[2684]: E0909 04:55:06.261742 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.261985 kubelet[2684]: E0909 04:55:06.261970 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.261985 kubelet[2684]: W0909 04:55:06.261986 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.262189 kubelet[2684]: E0909 04:55:06.261997 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.262598 kubelet[2684]: E0909 04:55:06.262581 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.262630 kubelet[2684]: W0909 04:55:06.262598 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.262630 kubelet[2684]: E0909 04:55:06.262611 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.262894 kubelet[2684]: E0909 04:55:06.262879 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.262989 kubelet[2684]: W0909 04:55:06.262896 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.262989 kubelet[2684]: E0909 04:55:06.262906 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.263126 kubelet[2684]: E0909 04:55:06.263109 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.263126 kubelet[2684]: W0909 04:55:06.263122 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.263169 kubelet[2684]: E0909 04:55:06.263131 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.263306 kubelet[2684]: E0909 04:55:06.263294 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.263333 kubelet[2684]: W0909 04:55:06.263305 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.263333 kubelet[2684]: E0909 04:55:06.263314 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.263656 kubelet[2684]: E0909 04:55:06.263639 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.263693 kubelet[2684]: W0909 04:55:06.263656 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.263693 kubelet[2684]: E0909 04:55:06.263667 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.263883 kubelet[2684]: E0909 04:55:06.263869 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.263883 kubelet[2684]: W0909 04:55:06.263881 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.263950 kubelet[2684]: E0909 04:55:06.263890 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.264051 kubelet[2684]: E0909 04:55:06.264037 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.264051 kubelet[2684]: W0909 04:55:06.264051 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.264099 kubelet[2684]: E0909 04:55:06.264059 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:06.264242 kubelet[2684]: E0909 04:55:06.264229 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.264270 kubelet[2684]: W0909 04:55:06.264242 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.264270 kubelet[2684]: E0909 04:55:06.264250 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:06.275047 kubelet[2684]: E0909 04:55:06.275021 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:06.275047 kubelet[2684]: W0909 04:55:06.275037 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:06.275047 kubelet[2684]: E0909 04:55:06.275048 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.052386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838707147.mount: Deactivated successfully. 
Sep 9 04:55:07.518213 containerd[1525]: time="2025-09-09T04:55:07.518157389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:07.528966 containerd[1525]: time="2025-09-09T04:55:07.528914625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 9 04:55:07.529922 containerd[1525]: time="2025-09-09T04:55:07.529884261Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:07.532579 containerd[1525]: time="2025-09-09T04:55:07.532537107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.507681812s" Sep 9 04:55:07.532631 containerd[1525]: time="2025-09-09T04:55:07.532578950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 04:55:07.533388 containerd[1525]: time="2025-09-09T04:55:07.533302927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 04:55:07.546380 containerd[1525]: time="2025-09-09T04:55:07.546028036Z" level=info msg="CreateContainer within sandbox \"3cb17d166c376bd73df22300f1596d8924cbd3894cf24b769e8cddd828ab909f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 04:55:07.551464 containerd[1525]: time="2025-09-09T04:55:07.551426855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:07.576362 containerd[1525]: time="2025-09-09T04:55:07.576323630Z" level=info msg="Container 99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:07.679993 containerd[1525]: time="2025-09-09T04:55:07.679945485Z" level=info msg="CreateContainer within sandbox \"3cb17d166c376bd73df22300f1596d8924cbd3894cf24b769e8cddd828ab909f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07\"" Sep 9 04:55:07.680579 containerd[1525]: time="2025-09-09T04:55:07.680555212Z" level=info msg="StartContainer for \"99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07\"" Sep 9 04:55:07.681601 containerd[1525]: time="2025-09-09T04:55:07.681575731Z" level=info msg="connecting to shim 99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07" address="unix:///run/containerd/s/b63955b4eb71ed5a8ca423cbf9810b488fcfb210fb627cf3e9b6b447671b6fae" protocol=ttrpc version=3 Sep 9 04:55:07.697949 systemd[1]: Started cri-containerd-99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07.scope - libcontainer container 99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07. 
Sep 9 04:55:07.735139 containerd[1525]: time="2025-09-09T04:55:07.735096731Z" level=info msg="StartContainer for \"99ac9671d959a8817ae52ffabc5dd98f89bda5a9d2d2af1c82b65ee002b50d07\" returns successfully" Sep 9 04:55:07.739418 kubelet[2684]: E0909 04:55:07.739298 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxqsx" podUID="a901a0c3-eff5-4d2f-bac6-0e583e580868" Sep 9 04:55:07.805322 kubelet[2684]: E0909 04:55:07.805209 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:07.826848 kubelet[2684]: I0909 04:55:07.826785 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d7bfcb557-5xmt2" podStartSLOduration=1.317432552 podStartE2EDuration="2.826771657s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:06.023880375 +0000 UTC m=+18.380107891" lastFinishedPulling="2025-09-09 04:55:07.53321948 +0000 UTC m=+19.889446996" observedRunningTime="2025-09-09 04:55:07.826754016 +0000 UTC m=+20.182981532" watchObservedRunningTime="2025-09-09 04:55:07.826771657 +0000 UTC m=+20.182999173" Sep 9 04:55:07.849797 kubelet[2684]: E0909 04:55:07.849760 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.849797 kubelet[2684]: W0909 04:55:07.849783 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.849797 kubelet[2684]: E0909 04:55:07.849803 2684 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.850645 kubelet[2684]: E0909 04:55:07.850627 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.850709 kubelet[2684]: W0909 04:55:07.850643 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.850747 kubelet[2684]: E0909 04:55:07.850713 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.850922 kubelet[2684]: E0909 04:55:07.850908 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.850922 kubelet[2684]: W0909 04:55:07.850922 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.850972 kubelet[2684]: E0909 04:55:07.850931 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:07.851110 kubelet[2684]: E0909 04:55:07.851094 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.851145 kubelet[2684]: W0909 04:55:07.851110 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.851145 kubelet[2684]: E0909 04:55:07.851120 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.851919 kubelet[2684]: E0909 04:55:07.851902 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.851919 kubelet[2684]: W0909 04:55:07.851917 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852008 kubelet[2684]: E0909 04:55:07.851929 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:07.852094 kubelet[2684]: E0909 04:55:07.852081 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.852094 kubelet[2684]: W0909 04:55:07.852092 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852101 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852217 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.852756 kubelet[2684]: W0909 04:55:07.852224 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852231 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852402 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.852756 kubelet[2684]: W0909 04:55:07.852410 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852419 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852552 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.852756 kubelet[2684]: W0909 04:55:07.852572 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852756 kubelet[2684]: E0909 04:55:07.852580 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:07.852953 kubelet[2684]: E0909 04:55:07.852750 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.852953 kubelet[2684]: W0909 04:55:07.852758 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852953 kubelet[2684]: E0909 04:55:07.852767 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.852953 kubelet[2684]: E0909 04:55:07.852889 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.852953 kubelet[2684]: W0909 04:55:07.852896 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.852953 kubelet[2684]: E0909 04:55:07.852903 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:07.853156 kubelet[2684]: E0909 04:55:07.853050 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.853156 kubelet[2684]: W0909 04:55:07.853090 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.853156 kubelet[2684]: E0909 04:55:07.853102 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.853305 kubelet[2684]: E0909 04:55:07.853290 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.853305 kubelet[2684]: W0909 04:55:07.853302 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.853360 kubelet[2684]: E0909 04:55:07.853312 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:07.853452 kubelet[2684]: E0909 04:55:07.853432 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.853452 kubelet[2684]: W0909 04:55:07.853442 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.853512 kubelet[2684]: E0909 04:55:07.853449 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:55:07.853643 kubelet[2684]: E0909 04:55:07.853631 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:55:07.853643 kubelet[2684]: W0909 04:55:07.853641 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:55:07.853723 kubelet[2684]: E0909 04:55:07.853649 2684 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:55:08.444396 containerd[1525]: time="2025-09-09T04:55:08.444353752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:08.446418 containerd[1525]: time="2025-09-09T04:55:08.446291656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 04:55:08.448965 containerd[1525]: time="2025-09-09T04:55:08.448635670Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:08.451475 containerd[1525]: time="2025-09-09T04:55:08.451443239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:08.452091 containerd[1525]: time="2025-09-09T04:55:08.452056964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 918.724916ms" Sep 9 04:55:08.452128 containerd[1525]: time="2025-09-09T04:55:08.452095727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 04:55:08.456607 containerd[1525]: time="2025-09-09T04:55:08.456571860Z" level=info msg="CreateContainer within sandbox \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 04:55:08.465961 containerd[1525]: time="2025-09-09T04:55:08.465929635Z" level=info msg="Container 5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:08.467003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482039014.mount: Deactivated successfully. Sep 9 04:55:08.473167 containerd[1525]: time="2025-09-09T04:55:08.473126730Z" level=info msg="CreateContainer within sandbox \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\"" Sep 9 04:55:08.473641 containerd[1525]: time="2025-09-09T04:55:08.473607486Z" level=info msg="StartContainer for \"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\"" Sep 9 04:55:08.475288 containerd[1525]: time="2025-09-09T04:55:08.475249648Z" level=info msg="connecting to shim 5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb" address="unix:///run/containerd/s/c7f1cbb4502a9b9849988f6d3f7dca6213e14995a42e7ecb54db7df782c00560" protocol=ttrpc version=3 Sep 9 04:55:08.495843 systemd[1]: Started cri-containerd-5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb.scope - libcontainer container 5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb. Sep 9 04:55:08.532204 containerd[1525]: time="2025-09-09T04:55:08.532171078Z" level=info msg="StartContainer for \"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\" returns successfully" Sep 9 04:55:08.544216 systemd[1]: cri-containerd-5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb.scope: Deactivated successfully. 
Sep 9 04:55:08.561232 containerd[1525]: time="2025-09-09T04:55:08.561195595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\" id:\"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\" pid:3385 exited_at:{seconds:1757393708 nanos:560797486}" Sep 9 04:55:08.568998 containerd[1525]: time="2025-09-09T04:55:08.567997861Z" level=info msg="received exit event container_id:\"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\" id:\"5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb\" pid:3385 exited_at:{seconds:1757393708 nanos:560797486}" Sep 9 04:55:08.619629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5708c5e11ec0308b16b2669a32dff08170ab79374dc4854b7392cecc8661d7bb-rootfs.mount: Deactivated successfully. Sep 9 04:55:08.808945 kubelet[2684]: I0909 04:55:08.808921 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:08.809929 kubelet[2684]: E0909 04:55:08.809402 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:08.810594 containerd[1525]: time="2025-09-09T04:55:08.810553528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 04:55:09.739768 kubelet[2684]: E0909 04:55:09.739700 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxqsx" podUID="a901a0c3-eff5-4d2f-bac6-0e583e580868" Sep 9 04:55:11.696028 containerd[1525]: time="2025-09-09T04:55:11.695982829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:11.696544 
containerd[1525]: time="2025-09-09T04:55:11.696512384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 04:55:11.697063 containerd[1525]: time="2025-09-09T04:55:11.697040218Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:11.698906 containerd[1525]: time="2025-09-09T04:55:11.698854857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:11.699818 containerd[1525]: time="2025-09-09T04:55:11.699782597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.889190947s" Sep 9 04:55:11.699818 containerd[1525]: time="2025-09-09T04:55:11.699814319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 04:55:11.710278 containerd[1525]: time="2025-09-09T04:55:11.710244121Z" level=info msg="CreateContainer within sandbox \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 04:55:11.718666 containerd[1525]: time="2025-09-09T04:55:11.718631909Z" level=info msg="Container db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:11.722497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029480408.mount: Deactivated successfully. 
Sep 9 04:55:11.727221 containerd[1525]: time="2025-09-09T04:55:11.727182187Z" level=info msg="CreateContainer within sandbox \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\"" Sep 9 04:55:11.727582 containerd[1525]: time="2025-09-09T04:55:11.727563972Z" level=info msg="StartContainer for \"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\"" Sep 9 04:55:11.729101 containerd[1525]: time="2025-09-09T04:55:11.729076231Z" level=info msg="connecting to shim db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806" address="unix:///run/containerd/s/c7f1cbb4502a9b9849988f6d3f7dca6213e14995a42e7ecb54db7df782c00560" protocol=ttrpc version=3 Sep 9 04:55:11.742236 kubelet[2684]: E0909 04:55:11.741963 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kxqsx" podUID="a901a0c3-eff5-4d2f-bac6-0e583e580868" Sep 9 04:55:11.753866 systemd[1]: Started cri-containerd-db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806.scope - libcontainer container db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806. Sep 9 04:55:11.801531 containerd[1525]: time="2025-09-09T04:55:11.801431958Z" level=info msg="StartContainer for \"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\" returns successfully" Sep 9 04:55:12.359919 systemd[1]: cri-containerd-db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806.scope: Deactivated successfully. Sep 9 04:55:12.360577 systemd[1]: cri-containerd-db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806.scope: Consumed 462ms CPU time, 181.7M memory peak, 2.2M read from disk, 165.8M written to disk. 
Sep 9 04:55:12.362423 containerd[1525]: time="2025-09-09T04:55:12.362380093Z" level=info msg="received exit event container_id:\"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\" id:\"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\" pid:3444 exited_at:{seconds:1757393712 nanos:362189481}" Sep 9 04:55:12.362845 containerd[1525]: time="2025-09-09T04:55:12.362707034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\" id:\"db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806\" pid:3444 exited_at:{seconds:1757393712 nanos:362189481}" Sep 9 04:55:12.380777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db4801c5daa328c6aef59d84c28cebe3c526e1de9a91faad83218d03f4a6a806-rootfs.mount: Deactivated successfully. Sep 9 04:55:12.438589 kubelet[2684]: I0909 04:55:12.438348 2684 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 04:55:12.541862 systemd[1]: Created slice kubepods-besteffort-pod7ff54052_9d3a_4a07_b2cc_5224b0f09400.slice - libcontainer container kubepods-besteffort-pod7ff54052_9d3a_4a07_b2cc_5224b0f09400.slice. Sep 9 04:55:12.546444 systemd[1]: Created slice kubepods-burstable-podfd0d0e5e_8b8c_4bfe_8a6f_2809b358103f.slice - libcontainer container kubepods-burstable-podfd0d0e5e_8b8c_4bfe_8a6f_2809b358103f.slice. Sep 9 04:55:12.554906 systemd[1]: Created slice kubepods-burstable-pod0ebb790d_cd70_4281_9a1e_e3b05d5d8f12.slice - libcontainer container kubepods-burstable-pod0ebb790d_cd70_4281_9a1e_e3b05d5d8f12.slice. Sep 9 04:55:12.560899 systemd[1]: Created slice kubepods-besteffort-pode2065b54_c63f_4dcd_bb07_0067793e6ac4.slice - libcontainer container kubepods-besteffort-pode2065b54_c63f_4dcd_bb07_0067793e6ac4.slice. 
Sep 9 04:55:12.564834 systemd[1]: Created slice kubepods-besteffort-pod021b6fc0_a415_4ddc_be6a_74700eea851d.slice - libcontainer container kubepods-besteffort-pod021b6fc0_a415_4ddc_be6a_74700eea851d.slice. Sep 9 04:55:12.572079 systemd[1]: Created slice kubepods-besteffort-pod4b6b7fb0_4687_4580_8783_7975c90a032c.slice - libcontainer container kubepods-besteffort-pod4b6b7fb0_4687_4580_8783_7975c90a032c.slice. Sep 9 04:55:12.577671 systemd[1]: Created slice kubepods-besteffort-pod6253321b_a298_4910_b15c_2740cae64a22.slice - libcontainer container kubepods-besteffort-pod6253321b_a298_4910_b15c_2740cae64a22.slice. Sep 9 04:55:12.623933 kubelet[2684]: I0909 04:55:12.623782 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-backend-key-pair\") pod \"whisker-7db8cc854f-ghqlz\" (UID: \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\") " pod="calico-system/whisker-7db8cc854f-ghqlz" Sep 9 04:55:12.623933 kubelet[2684]: I0909 04:55:12.623836 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ebb790d-cd70-4281-9a1e-e3b05d5d8f12-config-volume\") pod \"coredns-674b8bbfcf-gfx6j\" (UID: \"0ebb790d-cd70-4281-9a1e-e3b05d5d8f12\") " pod="kube-system/coredns-674b8bbfcf-gfx6j" Sep 9 04:55:12.623933 kubelet[2684]: I0909 04:55:12.623857 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b6b7fb0-4687-4580-8783-7975c90a032c-calico-apiserver-certs\") pod \"calico-apiserver-54fbdb58df-pbfzq\" (UID: \"4b6b7fb0-4687-4580-8783-7975c90a032c\") " pod="calico-apiserver/calico-apiserver-54fbdb58df-pbfzq" Sep 9 04:55:12.623933 kubelet[2684]: I0909 04:55:12.623872 2684 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsgcx\" (UniqueName: \"kubernetes.io/projected/0ebb790d-cd70-4281-9a1e-e3b05d5d8f12-kube-api-access-hsgcx\") pod \"coredns-674b8bbfcf-gfx6j\" (UID: \"0ebb790d-cd70-4281-9a1e-e3b05d5d8f12\") " pod="kube-system/coredns-674b8bbfcf-gfx6j" Sep 9 04:55:12.623933 kubelet[2684]: I0909 04:55:12.623890 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6253321b-a298-4910-b15c-2740cae64a22-goldmane-key-pair\") pod \"goldmane-54d579b49d-6pqbx\" (UID: \"6253321b-a298-4910-b15c-2740cae64a22\") " pod="calico-system/goldmane-54d579b49d-6pqbx" Sep 9 04:55:12.624227 kubelet[2684]: I0909 04:55:12.623904 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2065b54-c63f-4dcd-bb07-0067793e6ac4-calico-apiserver-certs\") pod \"calico-apiserver-54fbdb58df-rn6pf\" (UID: \"e2065b54-c63f-4dcd-bb07-0067793e6ac4\") " pod="calico-apiserver/calico-apiserver-54fbdb58df-rn6pf" Sep 9 04:55:12.624227 kubelet[2684]: I0909 04:55:12.624072 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4b2q\" (UniqueName: \"kubernetes.io/projected/7ff54052-9d3a-4a07-b2cc-5224b0f09400-kube-api-access-q4b2q\") pod \"whisker-7db8cc854f-ghqlz\" (UID: \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\") " pod="calico-system/whisker-7db8cc854f-ghqlz" Sep 9 04:55:12.624227 kubelet[2684]: I0909 04:55:12.624110 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/021b6fc0-a415-4ddc-be6a-74700eea851d-tigera-ca-bundle\") pod \"calico-kube-controllers-78f4f87864-7qvh4\" (UID: \"021b6fc0-a415-4ddc-be6a-74700eea851d\") " 
pod="calico-system/calico-kube-controllers-78f4f87864-7qvh4" Sep 9 04:55:12.624227 kubelet[2684]: I0909 04:55:12.624127 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6253321b-a298-4910-b15c-2740cae64a22-config\") pod \"goldmane-54d579b49d-6pqbx\" (UID: \"6253321b-a298-4910-b15c-2740cae64a22\") " pod="calico-system/goldmane-54d579b49d-6pqbx" Sep 9 04:55:12.624227 kubelet[2684]: I0909 04:55:12.624148 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gp4\" (UniqueName: \"kubernetes.io/projected/4b6b7fb0-4687-4580-8783-7975c90a032c-kube-api-access-j5gp4\") pod \"calico-apiserver-54fbdb58df-pbfzq\" (UID: \"4b6b7fb0-4687-4580-8783-7975c90a032c\") " pod="calico-apiserver/calico-apiserver-54fbdb58df-pbfzq" Sep 9 04:55:12.624359 kubelet[2684]: I0909 04:55:12.624162 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9mg\" (UniqueName: \"kubernetes.io/projected/e2065b54-c63f-4dcd-bb07-0067793e6ac4-kube-api-access-gx9mg\") pod \"calico-apiserver-54fbdb58df-rn6pf\" (UID: \"e2065b54-c63f-4dcd-bb07-0067793e6ac4\") " pod="calico-apiserver/calico-apiserver-54fbdb58df-rn6pf" Sep 9 04:55:12.624359 kubelet[2684]: I0909 04:55:12.624176 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f-config-volume\") pod \"coredns-674b8bbfcf-8wvr5\" (UID: \"fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f\") " pod="kube-system/coredns-674b8bbfcf-8wvr5" Sep 9 04:55:12.624359 kubelet[2684]: I0909 04:55:12.624195 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4b9\" (UniqueName: \"kubernetes.io/projected/021b6fc0-a415-4ddc-be6a-74700eea851d-kube-api-access-zg4b9\") 
pod \"calico-kube-controllers-78f4f87864-7qvh4\" (UID: \"021b6fc0-a415-4ddc-be6a-74700eea851d\") " pod="calico-system/calico-kube-controllers-78f4f87864-7qvh4" Sep 9 04:55:12.624359 kubelet[2684]: I0909 04:55:12.624221 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6253321b-a298-4910-b15c-2740cae64a22-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-6pqbx\" (UID: \"6253321b-a298-4910-b15c-2740cae64a22\") " pod="calico-system/goldmane-54d579b49d-6pqbx" Sep 9 04:55:12.624359 kubelet[2684]: I0909 04:55:12.624264 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2xx\" (UniqueName: \"kubernetes.io/projected/fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f-kube-api-access-9d2xx\") pod \"coredns-674b8bbfcf-8wvr5\" (UID: \"fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f\") " pod="kube-system/coredns-674b8bbfcf-8wvr5" Sep 9 04:55:12.624494 kubelet[2684]: I0909 04:55:12.624284 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-ca-bundle\") pod \"whisker-7db8cc854f-ghqlz\" (UID: \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\") " pod="calico-system/whisker-7db8cc854f-ghqlz" Sep 9 04:55:12.624494 kubelet[2684]: I0909 04:55:12.624410 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzb56\" (UniqueName: \"kubernetes.io/projected/6253321b-a298-4910-b15c-2740cae64a22-kube-api-access-xzb56\") pod \"goldmane-54d579b49d-6pqbx\" (UID: \"6253321b-a298-4910-b15c-2740cae64a22\") " pod="calico-system/goldmane-54d579b49d-6pqbx" Sep 9 04:55:12.824157 containerd[1525]: time="2025-09-09T04:55:12.823833624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 04:55:12.845398 containerd[1525]: 
time="2025-09-09T04:55:12.845340892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7db8cc854f-ghqlz,Uid:7ff54052-9d3a-4a07-b2cc-5224b0f09400,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:12.854316 kubelet[2684]: E0909 04:55:12.849591 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:12.855121 containerd[1525]: time="2025-09-09T04:55:12.854915093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvr5,Uid:fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:12.858394 kubelet[2684]: E0909 04:55:12.858355 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:12.859536 containerd[1525]: time="2025-09-09T04:55:12.859497820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfx6j,Uid:0ebb790d-cd70-4281-9a1e-e3b05d5d8f12,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:12.865519 containerd[1525]: time="2025-09-09T04:55:12.865464794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-rn6pf,Uid:e2065b54-c63f-4dcd-bb07-0067793e6ac4,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:12.869678 containerd[1525]: time="2025-09-09T04:55:12.869378159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4f87864-7qvh4,Uid:021b6fc0-a415-4ddc-be6a-74700eea851d,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:12.878785 containerd[1525]: time="2025-09-09T04:55:12.878343722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-pbfzq,Uid:4b6b7fb0-4687-4580-8783-7975c90a032c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:12.882336 containerd[1525]: time="2025-09-09T04:55:12.882300370Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6pqbx,Uid:6253321b-a298-4910-b15c-2740cae64a22,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:12.973963 containerd[1525]: time="2025-09-09T04:55:12.973507408Z" level=error msg="Failed to destroy network for sandbox \"681ac80e5c35788fd06f37261a372afb05f134d18252cc9c2afeedeed207a096\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.977314 containerd[1525]: time="2025-09-09T04:55:12.977261483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvr5,Uid:fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"681ac80e5c35788fd06f37261a372afb05f134d18252cc9c2afeedeed207a096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.979148 kubelet[2684]: E0909 04:55:12.979098 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681ac80e5c35788fd06f37261a372afb05f134d18252cc9c2afeedeed207a096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.979275 kubelet[2684]: E0909 04:55:12.979180 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681ac80e5c35788fd06f37261a372afb05f134d18252cc9c2afeedeed207a096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8wvr5" Sep 9 04:55:12.979275 kubelet[2684]: E0909 04:55:12.979207 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681ac80e5c35788fd06f37261a372afb05f134d18252cc9c2afeedeed207a096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8wvr5" Sep 9 04:55:12.979325 kubelet[2684]: E0909 04:55:12.979265 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8wvr5_kube-system(fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8wvr5_kube-system(fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"681ac80e5c35788fd06f37261a372afb05f134d18252cc9c2afeedeed207a096\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8wvr5" podUID="fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f" Sep 9 04:55:12.980095 containerd[1525]: time="2025-09-09T04:55:12.980058459Z" level=error msg="Failed to destroy network for sandbox \"f0a9739cca4652c210bb5133a3cb17e2d558f1de08d46fffafa3aac7553992f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.984244 containerd[1525]: time="2025-09-09T04:55:12.984038468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7db8cc854f-ghqlz,Uid:7ff54052-9d3a-4a07-b2cc-5224b0f09400,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a9739cca4652c210bb5133a3cb17e2d558f1de08d46fffafa3aac7553992f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.984744 containerd[1525]: time="2025-09-09T04:55:12.984711630Z" level=error msg="Failed to destroy network for sandbox \"2e491585a68a7237af47255fde2a6426edef8615d9ddd8c80140e958c4c64574\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.985523 kubelet[2684]: E0909 04:55:12.985477 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a9739cca4652c210bb5133a3cb17e2d558f1de08d46fffafa3aac7553992f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.985680 kubelet[2684]: E0909 04:55:12.985654 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a9739cca4652c210bb5133a3cb17e2d558f1de08d46fffafa3aac7553992f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7db8cc854f-ghqlz" Sep 9 04:55:12.985760 kubelet[2684]: E0909 04:55:12.985677 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0a9739cca4652c210bb5133a3cb17e2d558f1de08d46fffafa3aac7553992f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7db8cc854f-ghqlz" Sep 9 04:55:12.985760 kubelet[2684]: E0909 04:55:12.985741 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7db8cc854f-ghqlz_calico-system(7ff54052-9d3a-4a07-b2cc-5224b0f09400)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7db8cc854f-ghqlz_calico-system(7ff54052-9d3a-4a07-b2cc-5224b0f09400)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0a9739cca4652c210bb5133a3cb17e2d558f1de08d46fffafa3aac7553992f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7db8cc854f-ghqlz" podUID="7ff54052-9d3a-4a07-b2cc-5224b0f09400" Sep 9 04:55:12.986883 containerd[1525]: time="2025-09-09T04:55:12.986814082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfx6j,Uid:0ebb790d-cd70-4281-9a1e-e3b05d5d8f12,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e491585a68a7237af47255fde2a6426edef8615d9ddd8c80140e958c4c64574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.987127 kubelet[2684]: E0909 04:55:12.987073 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e491585a68a7237af47255fde2a6426edef8615d9ddd8c80140e958c4c64574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:12.987283 kubelet[2684]: E0909 
04:55:12.987125 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e491585a68a7237af47255fde2a6426edef8615d9ddd8c80140e958c4c64574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gfx6j" Sep 9 04:55:12.987283 kubelet[2684]: E0909 04:55:12.987142 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e491585a68a7237af47255fde2a6426edef8615d9ddd8c80140e958c4c64574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gfx6j" Sep 9 04:55:12.987283 kubelet[2684]: E0909 04:55:12.987199 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gfx6j_kube-system(0ebb790d-cd70-4281-9a1e-e3b05d5d8f12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gfx6j_kube-system(0ebb790d-cd70-4281-9a1e-e3b05d5d8f12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e491585a68a7237af47255fde2a6426edef8615d9ddd8c80140e958c4c64574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gfx6j" podUID="0ebb790d-cd70-4281-9a1e-e3b05d5d8f12" Sep 9 04:55:13.001778 containerd[1525]: time="2025-09-09T04:55:13.001737815Z" level=error msg="Failed to destroy network for sandbox \"0032d095aef9bc8dd5e8e19e4cb08459be5f6744b157a93c942ac12ce19787cb\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.003248 containerd[1525]: time="2025-09-09T04:55:13.003193663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-pbfzq,Uid:4b6b7fb0-4687-4580-8783-7975c90a032c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0032d095aef9bc8dd5e8e19e4cb08459be5f6744b157a93c942ac12ce19787cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.003611 kubelet[2684]: E0909 04:55:13.003576 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0032d095aef9bc8dd5e8e19e4cb08459be5f6744b157a93c942ac12ce19787cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.003667 kubelet[2684]: E0909 04:55:13.003634 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0032d095aef9bc8dd5e8e19e4cb08459be5f6744b157a93c942ac12ce19787cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54fbdb58df-pbfzq" Sep 9 04:55:13.003707 kubelet[2684]: E0909 04:55:13.003665 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0032d095aef9bc8dd5e8e19e4cb08459be5f6744b157a93c942ac12ce19787cb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54fbdb58df-pbfzq" Sep 9 04:55:13.003756 kubelet[2684]: E0909 04:55:13.003730 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54fbdb58df-pbfzq_calico-apiserver(4b6b7fb0-4687-4580-8783-7975c90a032c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54fbdb58df-pbfzq_calico-apiserver(4b6b7fb0-4687-4580-8783-7975c90a032c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0032d095aef9bc8dd5e8e19e4cb08459be5f6744b157a93c942ac12ce19787cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54fbdb58df-pbfzq" podUID="4b6b7fb0-4687-4580-8783-7975c90a032c" Sep 9 04:55:13.006949 containerd[1525]: time="2025-09-09T04:55:13.006841883Z" level=error msg="Failed to destroy network for sandbox \"0b7b0a895b4504069fe17d56105c63c2d4693e77de1e71427c820ad36652bc78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.007649 containerd[1525]: time="2025-09-09T04:55:13.007449439Z" level=error msg="Failed to destroy network for sandbox \"5068f3656db742c2471c00148bc32e244ce86da243ec12e546858662bda71203\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.009314 containerd[1525]: time="2025-09-09T04:55:13.009277029Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-78f4f87864-7qvh4,Uid:021b6fc0-a415-4ddc-be6a-74700eea851d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7b0a895b4504069fe17d56105c63c2d4693e77de1e71427c820ad36652bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.009503 kubelet[2684]: E0909 04:55:13.009465 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7b0a895b4504069fe17d56105c63c2d4693e77de1e71427c820ad36652bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.009591 kubelet[2684]: E0909 04:55:13.009552 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7b0a895b4504069fe17d56105c63c2d4693e77de1e71427c820ad36652bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f4f87864-7qvh4" Sep 9 04:55:13.009591 kubelet[2684]: E0909 04:55:13.009573 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b7b0a895b4504069fe17d56105c63c2d4693e77de1e71427c820ad36652bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f4f87864-7qvh4" Sep 9 04:55:13.009671 kubelet[2684]: E0909 04:55:13.009622 
2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f4f87864-7qvh4_calico-system(021b6fc0-a415-4ddc-be6a-74700eea851d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f4f87864-7qvh4_calico-system(021b6fc0-a415-4ddc-be6a-74700eea851d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b7b0a895b4504069fe17d56105c63c2d4693e77de1e71427c820ad36652bc78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f4f87864-7qvh4" podUID="021b6fc0-a415-4ddc-be6a-74700eea851d" Sep 9 04:55:13.010098 containerd[1525]: time="2025-09-09T04:55:13.010054236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6pqbx,Uid:6253321b-a298-4910-b15c-2740cae64a22,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5068f3656db742c2471c00148bc32e244ce86da243ec12e546858662bda71203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.010323 kubelet[2684]: E0909 04:55:13.010294 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5068f3656db742c2471c00148bc32e244ce86da243ec12e546858662bda71203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.010385 kubelet[2684]: E0909 04:55:13.010331 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"5068f3656db742c2471c00148bc32e244ce86da243ec12e546858662bda71203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6pqbx" Sep 9 04:55:13.010385 kubelet[2684]: E0909 04:55:13.010363 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5068f3656db742c2471c00148bc32e244ce86da243ec12e546858662bda71203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6pqbx" Sep 9 04:55:13.010456 kubelet[2684]: E0909 04:55:13.010426 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-6pqbx_calico-system(6253321b-a298-4910-b15c-2740cae64a22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-6pqbx_calico-system(6253321b-a298-4910-b15c-2740cae64a22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5068f3656db742c2471c00148bc32e244ce86da243ec12e546858662bda71203\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6pqbx" podUID="6253321b-a298-4910-b15c-2740cae64a22" Sep 9 04:55:13.016500 containerd[1525]: time="2025-09-09T04:55:13.016457942Z" level=error msg="Failed to destroy network for sandbox \"f82a676fcf3e7774a2b5897e36c0995c197d929b7f673292fb5e347833a24822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
04:55:13.018065 containerd[1525]: time="2025-09-09T04:55:13.018032557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-rn6pf,Uid:e2065b54-c63f-4dcd-bb07-0067793e6ac4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82a676fcf3e7774a2b5897e36c0995c197d929b7f673292fb5e347833a24822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.018216 kubelet[2684]: E0909 04:55:13.018185 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82a676fcf3e7774a2b5897e36c0995c197d929b7f673292fb5e347833a24822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.018263 kubelet[2684]: E0909 04:55:13.018229 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82a676fcf3e7774a2b5897e36c0995c197d929b7f673292fb5e347833a24822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54fbdb58df-rn6pf" Sep 9 04:55:13.018263 kubelet[2684]: E0909 04:55:13.018251 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f82a676fcf3e7774a2b5897e36c0995c197d929b7f673292fb5e347833a24822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-54fbdb58df-rn6pf" Sep 9 04:55:13.018330 kubelet[2684]: E0909 04:55:13.018285 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54fbdb58df-rn6pf_calico-apiserver(e2065b54-c63f-4dcd-bb07-0067793e6ac4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54fbdb58df-rn6pf_calico-apiserver(e2065b54-c63f-4dcd-bb07-0067793e6ac4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f82a676fcf3e7774a2b5897e36c0995c197d929b7f673292fb5e347833a24822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54fbdb58df-rn6pf" podUID="e2065b54-c63f-4dcd-bb07-0067793e6ac4" Sep 9 04:55:13.746012 systemd[1]: Created slice kubepods-besteffort-poda901a0c3_eff5_4d2f_bac6_0e583e580868.slice - libcontainer container kubepods-besteffort-poda901a0c3_eff5_4d2f_bac6_0e583e580868.slice. 
Sep 9 04:55:13.748197 containerd[1525]: time="2025-09-09T04:55:13.748163931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxqsx,Uid:a901a0c3-eff5-4d2f-bac6-0e583e580868,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:13.808412 containerd[1525]: time="2025-09-09T04:55:13.808365677Z" level=error msg="Failed to destroy network for sandbox \"45a45701d10fbe07a11db5c9d60ea7e1cdd41e68be94e7e870b463396efcf110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.810444 containerd[1525]: time="2025-09-09T04:55:13.810383438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxqsx,Uid:a901a0c3-eff5-4d2f-bac6-0e583e580868,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a45701d10fbe07a11db5c9d60ea7e1cdd41e68be94e7e870b463396efcf110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.810679 kubelet[2684]: E0909 04:55:13.810624 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a45701d10fbe07a11db5c9d60ea7e1cdd41e68be94e7e870b463396efcf110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:55:13.810679 kubelet[2684]: E0909 04:55:13.810698 2684 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a45701d10fbe07a11db5c9d60ea7e1cdd41e68be94e7e870b463396efcf110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:13.810795 kubelet[2684]: E0909 04:55:13.810719 2684 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a45701d10fbe07a11db5c9d60ea7e1cdd41e68be94e7e870b463396efcf110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kxqsx" Sep 9 04:55:13.810795 kubelet[2684]: E0909 04:55:13.810764 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kxqsx_calico-system(a901a0c3-eff5-4d2f-bac6-0e583e580868)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kxqsx_calico-system(a901a0c3-eff5-4d2f-bac6-0e583e580868)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a45701d10fbe07a11db5c9d60ea7e1cdd41e68be94e7e870b463396efcf110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kxqsx" podUID="a901a0c3-eff5-4d2f-bac6-0e583e580868" Sep 9 04:55:13.810846 systemd[1]: run-netns-cni\x2d91749878\x2d3d11\x2d348e\x2d1b3d\x2d1fc9fa0aebf9.mount: Deactivated successfully. Sep 9 04:55:16.812909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974981934.mount: Deactivated successfully. 
Sep 9 04:55:16.960707 containerd[1525]: time="2025-09-09T04:55:16.960638204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:16.961560 containerd[1525]: time="2025-09-09T04:55:16.961487970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 04:55:16.962489 containerd[1525]: time="2025-09-09T04:55:16.962451662Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:16.964282 containerd[1525]: time="2025-09-09T04:55:16.964243558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:16.964718 containerd[1525]: time="2025-09-09T04:55:16.964669621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.140794635s" Sep 9 04:55:16.964764 containerd[1525]: time="2025-09-09T04:55:16.964725664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 04:55:16.983834 containerd[1525]: time="2025-09-09T04:55:16.983772087Z" level=info msg="CreateContainer within sandbox \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 04:55:16.998272 containerd[1525]: time="2025-09-09T04:55:16.998222583Z" level=info msg="Container 
4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:17.002663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955001679.mount: Deactivated successfully. Sep 9 04:55:17.008116 containerd[1525]: time="2025-09-09T04:55:17.008051777Z" level=info msg="CreateContainer within sandbox \"bd8e7c35f980941c6de186a055196cb17701ed9cada0f9a2b335f1964de62cca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f\"" Sep 9 04:55:17.011419 containerd[1525]: time="2025-09-09T04:55:17.011071174Z" level=info msg="StartContainer for \"4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f\"" Sep 9 04:55:17.018445 containerd[1525]: time="2025-09-09T04:55:17.018369392Z" level=info msg="connecting to shim 4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f" address="unix:///run/containerd/s/c7f1cbb4502a9b9849988f6d3f7dca6213e14995a42e7ecb54db7df782c00560" protocol=ttrpc version=3 Sep 9 04:55:17.060887 systemd[1]: Started cri-containerd-4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f.scope - libcontainer container 4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f. Sep 9 04:55:17.099071 containerd[1525]: time="2025-09-09T04:55:17.098972128Z" level=info msg="StartContainer for \"4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f\" returns successfully" Sep 9 04:55:17.229216 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 04:55:17.229347 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 04:55:17.457335 kubelet[2684]: I0909 04:55:17.457103 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-ca-bundle\") pod \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\" (UID: \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\") " Sep 9 04:55:17.457335 kubelet[2684]: I0909 04:55:17.457182 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4b2q\" (UniqueName: \"kubernetes.io/projected/7ff54052-9d3a-4a07-b2cc-5224b0f09400-kube-api-access-q4b2q\") pod \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\" (UID: \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\") " Sep 9 04:55:17.457335 kubelet[2684]: I0909 04:55:17.457216 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-backend-key-pair\") pod \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\" (UID: \"7ff54052-9d3a-4a07-b2cc-5224b0f09400\") " Sep 9 04:55:17.474076 kubelet[2684]: I0909 04:55:17.474023 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7ff54052-9d3a-4a07-b2cc-5224b0f09400" (UID: "7ff54052-9d3a-4a07-b2cc-5224b0f09400"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 04:55:17.475525 kubelet[2684]: I0909 04:55:17.475480 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff54052-9d3a-4a07-b2cc-5224b0f09400-kube-api-access-q4b2q" (OuterVolumeSpecName: "kube-api-access-q4b2q") pod "7ff54052-9d3a-4a07-b2cc-5224b0f09400" (UID: "7ff54052-9d3a-4a07-b2cc-5224b0f09400"). InnerVolumeSpecName "kube-api-access-q4b2q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 04:55:17.477777 kubelet[2684]: I0909 04:55:17.477731 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7ff54052-9d3a-4a07-b2cc-5224b0f09400" (UID: "7ff54052-9d3a-4a07-b2cc-5224b0f09400"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 04:55:17.558396 kubelet[2684]: I0909 04:55:17.558347 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 04:55:17.558396 kubelet[2684]: I0909 04:55:17.558384 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4b2q\" (UniqueName: \"kubernetes.io/projected/7ff54052-9d3a-4a07-b2cc-5224b0f09400-kube-api-access-q4b2q\") on node \"localhost\" DevicePath \"\"" Sep 9 04:55:17.558396 kubelet[2684]: I0909 04:55:17.558394 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ff54052-9d3a-4a07-b2cc-5224b0f09400-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 04:55:17.749258 systemd[1]: Removed slice kubepods-besteffort-pod7ff54052_9d3a_4a07_b2cc_5224b0f09400.slice - libcontainer container kubepods-besteffort-pod7ff54052_9d3a_4a07_b2cc_5224b0f09400.slice. Sep 9 04:55:17.813682 systemd[1]: var-lib-kubelet-pods-7ff54052\x2d9d3a\x2d4a07\x2db2cc\x2d5224b0f09400-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4b2q.mount: Deactivated successfully. Sep 9 04:55:17.813890 systemd[1]: var-lib-kubelet-pods-7ff54052\x2d9d3a\x2d4a07\x2db2cc\x2d5224b0f09400-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 04:55:17.870576 kubelet[2684]: I0909 04:55:17.870487 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f7n5v" podStartSLOduration=2.160490807 podStartE2EDuration="12.870468018s" podCreationTimestamp="2025-09-09 04:55:05 +0000 UTC" firstStartedPulling="2025-09-09 04:55:06.255275801 +0000 UTC m=+18.611503317" lastFinishedPulling="2025-09-09 04:55:16.965253012 +0000 UTC m=+29.321480528" observedRunningTime="2025-09-09 04:55:17.857482505 +0000 UTC m=+30.213710021" watchObservedRunningTime="2025-09-09 04:55:17.870468018 +0000 UTC m=+30.226695534" Sep 9 04:55:17.935667 systemd[1]: Created slice kubepods-besteffort-pod31e36a59_535a_41cb_8990_4332af3d93d9.slice - libcontainer container kubepods-besteffort-pod31e36a59_535a_41cb_8990_4332af3d93d9.slice. Sep 9 04:55:17.960886 kubelet[2684]: I0909 04:55:17.960824 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31e36a59-535a-41cb-8990-4332af3d93d9-whisker-backend-key-pair\") pod \"whisker-bb496ffd8-2wxd2\" (UID: \"31e36a59-535a-41cb-8990-4332af3d93d9\") " pod="calico-system/whisker-bb496ffd8-2wxd2" Sep 9 04:55:17.960886 kubelet[2684]: I0909 04:55:17.960880 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zm4\" (UniqueName: \"kubernetes.io/projected/31e36a59-535a-41cb-8990-4332af3d93d9-kube-api-access-c5zm4\") pod \"whisker-bb496ffd8-2wxd2\" (UID: \"31e36a59-535a-41cb-8990-4332af3d93d9\") " pod="calico-system/whisker-bb496ffd8-2wxd2" Sep 9 04:55:17.961061 kubelet[2684]: I0909 04:55:17.960980 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31e36a59-535a-41cb-8990-4332af3d93d9-whisker-ca-bundle\") pod \"whisker-bb496ffd8-2wxd2\" (UID: \"31e36a59-535a-41cb-8990-4332af3d93d9\") 
" pod="calico-system/whisker-bb496ffd8-2wxd2" Sep 9 04:55:18.240924 containerd[1525]: time="2025-09-09T04:55:18.240878180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb496ffd8-2wxd2,Uid:31e36a59-535a-41cb-8990-4332af3d93d9,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:18.427447 systemd-networkd[1442]: cali27c12fa2d52: Link UP Sep 9 04:55:18.427752 systemd-networkd[1442]: cali27c12fa2d52: Gained carrier Sep 9 04:55:18.442806 containerd[1525]: 2025-09-09 04:55:18.263 [INFO][3819] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:18.442806 containerd[1525]: 2025-09-09 04:55:18.312 [INFO][3819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bb496ffd8--2wxd2-eth0 whisker-bb496ffd8- calico-system 31e36a59-535a-41cb-8990-4332af3d93d9 919 0 2025-09-09 04:55:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bb496ffd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bb496ffd8-2wxd2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali27c12fa2d52 [] [] }} ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-" Sep 9 04:55:18.442806 containerd[1525]: 2025-09-09 04:55:18.312 [INFO][3819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.442806 containerd[1525]: 2025-09-09 04:55:18.381 [INFO][3834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" HandleID="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Workload="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.381 [INFO][3834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" HandleID="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Workload="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bb496ffd8-2wxd2", "timestamp":"2025-09-09 04:55:18.381119715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.381 [INFO][3834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.381 [INFO][3834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.381 [INFO][3834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.391 [INFO][3834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" host="localhost" Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.397 [INFO][3834] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.402 [INFO][3834] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.404 [INFO][3834] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.407 [INFO][3834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:18.443071 containerd[1525]: 2025-09-09 04:55:18.407 [INFO][3834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" host="localhost" Sep 9 04:55:18.443261 containerd[1525]: 2025-09-09 04:55:18.409 [INFO][3834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2 Sep 9 04:55:18.443261 containerd[1525]: 2025-09-09 04:55:18.413 [INFO][3834] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" host="localhost" Sep 9 04:55:18.443261 containerd[1525]: 2025-09-09 04:55:18.418 [INFO][3834] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" host="localhost" Sep 9 04:55:18.443261 containerd[1525]: 2025-09-09 04:55:18.419 [INFO][3834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" host="localhost" Sep 9 04:55:18.443261 containerd[1525]: 2025-09-09 04:55:18.419 [INFO][3834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:18.443261 containerd[1525]: 2025-09-09 04:55:18.419 [INFO][3834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" HandleID="k8s-pod-network.4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Workload="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.443370 containerd[1525]: 2025-09-09 04:55:18.421 [INFO][3819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bb496ffd8--2wxd2-eth0", GenerateName:"whisker-bb496ffd8-", Namespace:"calico-system", SelfLink:"", UID:"31e36a59-535a-41cb-8990-4332af3d93d9", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bb496ffd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bb496ffd8-2wxd2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali27c12fa2d52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:18.443370 containerd[1525]: 2025-09-09 04:55:18.421 [INFO][3819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.443431 containerd[1525]: 2025-09-09 04:55:18.421 [INFO][3819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27c12fa2d52 ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.443431 containerd[1525]: 2025-09-09 04:55:18.428 [INFO][3819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.443467 containerd[1525]: 2025-09-09 04:55:18.429 [INFO][3819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" 
WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bb496ffd8--2wxd2-eth0", GenerateName:"whisker-bb496ffd8-", Namespace:"calico-system", SelfLink:"", UID:"31e36a59-535a-41cb-8990-4332af3d93d9", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bb496ffd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2", Pod:"whisker-bb496ffd8-2wxd2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali27c12fa2d52", MAC:"ae:1d:72:4b:d3:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:18.443510 containerd[1525]: 2025-09-09 04:55:18.440 [INFO][3819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" Namespace="calico-system" Pod="whisker-bb496ffd8-2wxd2" WorkloadEndpoint="localhost-k8s-whisker--bb496ffd8--2wxd2-eth0" Sep 9 04:55:18.629395 containerd[1525]: time="2025-09-09T04:55:18.629344092Z" level=info msg="connecting to shim 
4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2" address="unix:///run/containerd/s/eb95f0462cee37b9735889f6f045649b77376670f44bd95a7ee6877c0084e53a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:18.700923 systemd[1]: Started cri-containerd-4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2.scope - libcontainer container 4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2. Sep 9 04:55:18.720787 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:18.761236 containerd[1525]: time="2025-09-09T04:55:18.761175206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb496ffd8-2wxd2,Uid:31e36a59-535a-41cb-8990-4332af3d93d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2\"" Sep 9 04:55:18.766614 containerd[1525]: time="2025-09-09T04:55:18.766581556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 04:55:18.839170 kubelet[2684]: I0909 04:55:18.839140 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:19.601777 systemd-networkd[1442]: cali27c12fa2d52: Gained IPv6LL Sep 9 04:55:19.744650 kubelet[2684]: I0909 04:55:19.744613 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff54052-9d3a-4a07-b2cc-5224b0f09400" path="/var/lib/kubelet/pods/7ff54052-9d3a-4a07-b2cc-5224b0f09400/volumes" Sep 9 04:55:19.908565 containerd[1525]: time="2025-09-09T04:55:19.908439794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 04:55:19.912887 containerd[1525]: time="2025-09-09T04:55:19.912823366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo 
digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.144946864s" Sep 9 04:55:19.912997 containerd[1525]: time="2025-09-09T04:55:19.912882849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 04:55:19.913969 containerd[1525]: time="2025-09-09T04:55:19.913933260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:19.915984 containerd[1525]: time="2025-09-09T04:55:19.915264244Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:19.916638 containerd[1525]: time="2025-09-09T04:55:19.916598428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:19.917815 containerd[1525]: time="2025-09-09T04:55:19.917782286Z" level=info msg="CreateContainer within sandbox \"4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 04:55:19.925430 containerd[1525]: time="2025-09-09T04:55:19.925383333Z" level=info msg="Container 210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:19.927614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751983948.mount: Deactivated successfully. 
Sep 9 04:55:19.932069 containerd[1525]: time="2025-09-09T04:55:19.932004213Z" level=info msg="CreateContainer within sandbox \"4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac\"" Sep 9 04:55:19.932652 containerd[1525]: time="2025-09-09T04:55:19.932616603Z" level=info msg="StartContainer for \"210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac\"" Sep 9 04:55:19.942196 containerd[1525]: time="2025-09-09T04:55:19.942126343Z" level=info msg="connecting to shim 210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac" address="unix:///run/containerd/s/eb95f0462cee37b9735889f6f045649b77376670f44bd95a7ee6877c0084e53a" protocol=ttrpc version=3 Sep 9 04:55:19.969893 systemd[1]: Started cri-containerd-210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac.scope - libcontainer container 210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac. Sep 9 04:55:20.003553 containerd[1525]: time="2025-09-09T04:55:20.003444103Z" level=info msg="StartContainer for \"210cfe2b572fea8f98fcdfdf7ae7382672c9489a4081d40ebe83e3caedc6a6ac\" returns successfully" Sep 9 04:55:20.005110 containerd[1525]: time="2025-09-09T04:55:20.005078259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 04:55:21.561902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088467751.mount: Deactivated successfully. 
Sep 9 04:55:21.580423 containerd[1525]: time="2025-09-09T04:55:21.580383534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:21.581170 containerd[1525]: time="2025-09-09T04:55:21.581129288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 04:55:21.582159 containerd[1525]: time="2025-09-09T04:55:21.582114933Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:21.583864 containerd[1525]: time="2025-09-09T04:55:21.583836371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:21.585039 containerd[1525]: time="2025-09-09T04:55:21.585004024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.579868962s" Sep 9 04:55:21.585081 containerd[1525]: time="2025-09-09T04:55:21.585038665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 04:55:21.589754 containerd[1525]: time="2025-09-09T04:55:21.589723157Z" level=info msg="CreateContainer within sandbox \"4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 04:55:21.597707 
containerd[1525]: time="2025-09-09T04:55:21.596103167Z" level=info msg="Container dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:21.603479 containerd[1525]: time="2025-09-09T04:55:21.603438339Z" level=info msg="CreateContainer within sandbox \"4ccedc1da524e4cee3c86b8aafefbdc96fef32c290e6c2187ec1a228d44fa0a2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961\"" Sep 9 04:55:21.604205 containerd[1525]: time="2025-09-09T04:55:21.604183373Z" level=info msg="StartContainer for \"dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961\"" Sep 9 04:55:21.605226 containerd[1525]: time="2025-09-09T04:55:21.605192378Z" level=info msg="connecting to shim dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961" address="unix:///run/containerd/s/eb95f0462cee37b9735889f6f045649b77376670f44bd95a7ee6877c0084e53a" protocol=ttrpc version=3 Sep 9 04:55:21.628867 systemd[1]: Started cri-containerd-dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961.scope - libcontainer container dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961. 
Sep 9 04:55:21.690328 containerd[1525]: time="2025-09-09T04:55:21.690288594Z" level=info msg="StartContainer for \"dcc895c5ffa26804b7467ed7093d5489c80f9308ade2e4af153a41b53f636961\" returns successfully" Sep 9 04:55:25.742769 containerd[1525]: time="2025-09-09T04:55:25.740584874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-pbfzq,Uid:4b6b7fb0-4687-4580-8783-7975c90a032c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:25.743352 containerd[1525]: time="2025-09-09T04:55:25.743308584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6pqbx,Uid:6253321b-a298-4910-b15c-2740cae64a22,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:25.743901 containerd[1525]: time="2025-09-09T04:55:25.743671319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxqsx,Uid:a901a0c3-eff5-4d2f-bac6-0e583e580868,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:25.743901 containerd[1525]: time="2025-09-09T04:55:25.743743602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-rn6pf,Uid:e2065b54-c63f-4dcd-bb07-0067793e6ac4,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:55:25.894640 systemd-networkd[1442]: cali5a8b8806dea: Link UP Sep 9 04:55:25.894983 systemd-networkd[1442]: cali5a8b8806dea: Gained carrier Sep 9 04:55:25.910134 containerd[1525]: 2025-09-09 04:55:25.790 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:25.910134 containerd[1525]: 2025-09-09 04:55:25.805 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--6pqbx-eth0 goldmane-54d579b49d- calico-system 6253321b-a298-4910-b15c-2740cae64a22 844 0 2025-09-09 04:55:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-6pqbx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5a8b8806dea [] [] }} ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-" Sep 9 04:55:25.910134 containerd[1525]: 2025-09-09 04:55:25.806 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.910134 containerd[1525]: 2025-09-09 04:55:25.840 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" HandleID="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Workload="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.840 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" HandleID="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Workload="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-6pqbx", "timestamp":"2025-09-09 04:55:25.840314132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.840 [INFO][4295] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.840 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.840 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.856 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" host="localhost" Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.861 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.866 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.869 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.872 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:25.910328 containerd[1525]: 2025-09-09 04:55:25.872 [INFO][4295] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" host="localhost" Sep 9 04:55:25.910525 containerd[1525]: 2025-09-09 04:55:25.873 [INFO][4295] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37 Sep 9 04:55:25.910525 containerd[1525]: 2025-09-09 04:55:25.877 [INFO][4295] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" host="localhost" Sep 9 04:55:25.910525 
containerd[1525]: 2025-09-09 04:55:25.883 [INFO][4295] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" host="localhost" Sep 9 04:55:25.910525 containerd[1525]: 2025-09-09 04:55:25.883 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" host="localhost" Sep 9 04:55:25.910525 containerd[1525]: 2025-09-09 04:55:25.883 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:25.910525 containerd[1525]: 2025-09-09 04:55:25.883 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" HandleID="k8s-pod-network.8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Workload="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.910682 containerd[1525]: 2025-09-09 04:55:25.885 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--6pqbx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6253321b-a298-4910-b15c-2740cae64a22", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-6pqbx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a8b8806dea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:25.910682 containerd[1525]: 2025-09-09 04:55:25.885 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.910819 containerd[1525]: 2025-09-09 04:55:25.885 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a8b8806dea ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.910819 containerd[1525]: 2025-09-09 04:55:25.895 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.910858 containerd[1525]: 2025-09-09 04:55:25.896 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--6pqbx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6253321b-a298-4910-b15c-2740cae64a22", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37", Pod:"goldmane-54d579b49d-6pqbx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a8b8806dea", MAC:"da:dd:22:fe:3a:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:25.910904 containerd[1525]: 2025-09-09 04:55:25.906 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" Namespace="calico-system" Pod="goldmane-54d579b49d-6pqbx" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6pqbx-eth0" Sep 9 04:55:25.914897 kubelet[2684]: I0909 04:55:25.914844 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bb496ffd8-2wxd2" podStartSLOduration=6.095509704 podStartE2EDuration="8.914827293s" podCreationTimestamp="2025-09-09 04:55:17 +0000 UTC" firstStartedPulling="2025-09-09 04:55:18.766326144 +0000 UTC m=+31.122553660" lastFinishedPulling="2025-09-09 04:55:21.585643733 +0000 UTC m=+33.941871249" observedRunningTime="2025-09-09 04:55:21.891891367 +0000 UTC m=+34.248118883" watchObservedRunningTime="2025-09-09 04:55:25.914827293 +0000 UTC m=+38.271054769" Sep 9 04:55:25.928746 containerd[1525]: time="2025-09-09T04:55:25.928706972Z" level=info msg="connecting to shim 8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37" address="unix:///run/containerd/s/6fdc45c3e820c6852984567e79838168e757f3b16705cf44712dafdd9a28ca9f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:25.951845 systemd[1]: Started cri-containerd-8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37.scope - libcontainer container 8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37. 
Sep 9 04:55:25.964544 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:25.997062 systemd-networkd[1442]: cali1de466530bf: Link UP Sep 9 04:55:25.997390 systemd-networkd[1442]: cali1de466530bf: Gained carrier Sep 9 04:55:26.027708 containerd[1525]: 2025-09-09 04:55:25.785 [INFO][4232] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.027708 containerd[1525]: 2025-09-09 04:55:25.802 [INFO][4232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0 calico-apiserver-54fbdb58df- calico-apiserver 4b6b7fb0-4687-4580-8783-7975c90a032c 848 0 2025-09-09 04:55:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54fbdb58df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54fbdb58df-pbfzq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1de466530bf [] [] }} ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-" Sep 9 04:55:26.027708 containerd[1525]: 2025-09-09 04:55:25.802 [INFO][4232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.027708 containerd[1525]: 2025-09-09 04:55:25.841 [INFO][4290] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" HandleID="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Workload="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.841 [INFO][4290] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" HandleID="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Workload="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005109e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54fbdb58df-pbfzq", "timestamp":"2025-09-09 04:55:25.841178566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.841 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.885 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.885 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.957 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" host="localhost" Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.962 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.967 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.969 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.971 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.027928 containerd[1525]: 2025-09-09 04:55:25.971 [INFO][4290] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" host="localhost" Sep 9 04:55:26.028141 containerd[1525]: 2025-09-09 04:55:25.973 [INFO][4290] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6 Sep 9 04:55:26.028141 containerd[1525]: 2025-09-09 04:55:25.979 [INFO][4290] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" host="localhost" Sep 9 04:55:26.028141 containerd[1525]: 2025-09-09 04:55:25.987 [INFO][4290] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" host="localhost" Sep 9 04:55:26.028141 containerd[1525]: 2025-09-09 04:55:25.987 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" host="localhost" Sep 9 04:55:26.028141 containerd[1525]: 2025-09-09 04:55:25.987 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:26.028141 containerd[1525]: 2025-09-09 04:55:25.987 [INFO][4290] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" HandleID="k8s-pod-network.b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Workload="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.028297 containerd[1525]: 2025-09-09 04:55:25.989 [INFO][4232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0", GenerateName:"calico-apiserver-54fbdb58df-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b6b7fb0-4687-4580-8783-7975c90a032c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fbdb58df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54fbdb58df-pbfzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1de466530bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.028391 containerd[1525]: 2025-09-09 04:55:25.992 [INFO][4232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.028391 containerd[1525]: 2025-09-09 04:55:25.992 [INFO][4232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1de466530bf ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.028391 containerd[1525]: 2025-09-09 04:55:26.002 [INFO][4232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.028457 containerd[1525]: 2025-09-09 04:55:26.006 [INFO][4232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0", GenerateName:"calico-apiserver-54fbdb58df-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b6b7fb0-4687-4580-8783-7975c90a032c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fbdb58df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6", Pod:"calico-apiserver-54fbdb58df-pbfzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1de466530bf", MAC:"2e:f4:70:43:3b:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.028506 containerd[1525]: 2025-09-09 04:55:26.020 [INFO][4232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-pbfzq" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--pbfzq-eth0" Sep 9 04:55:26.038424 containerd[1525]: time="2025-09-09T04:55:26.038345548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6pqbx,Uid:6253321b-a298-4910-b15c-2740cae64a22,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37\"" Sep 9 04:55:26.051529 containerd[1525]: time="2025-09-09T04:55:26.051123649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 04:55:26.054666 containerd[1525]: time="2025-09-09T04:55:26.054470661Z" level=info msg="connecting to shim b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6" address="unix:///run/containerd/s/7c5b2fa7c6a4c656a0633a648bb278e3cdfde37a7d41a2a3a4f1594f61f326a4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:26.088949 systemd-networkd[1442]: cali452658fafe6: Link UP Sep 9 04:55:26.089469 systemd-networkd[1442]: cali452658fafe6: Gained carrier Sep 9 04:55:26.091826 systemd[1]: Started cri-containerd-b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6.scope - libcontainer container b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6. 
Sep 9 04:55:26.107820 containerd[1525]: 2025-09-09 04:55:25.782 [INFO][4240] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.107820 containerd[1525]: 2025-09-09 04:55:25.803 [INFO][4240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kxqsx-eth0 csi-node-driver- calico-system a901a0c3-eff5-4d2f-bac6-0e583e580868 724 0 2025-09-09 04:55:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kxqsx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali452658fafe6 [] [] }} ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-" Sep 9 04:55:26.107820 containerd[1525]: 2025-09-09 04:55:25.803 [INFO][4240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.107820 containerd[1525]: 2025-09-09 04:55:25.845 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" HandleID="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Workload="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:25.845 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" HandleID="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Workload="localhost-k8s-csi--node--driver--kxqsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kxqsx", "timestamp":"2025-09-09 04:55:25.845560183 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:25.845 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:25.987 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:25.987 [INFO][4289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:26.059 [INFO][4289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" host="localhost" Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:26.063 [INFO][4289] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:26.068 [INFO][4289] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:26.069 [INFO][4289] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:26.071 [INFO][4289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Sep 9 04:55:26.108028 containerd[1525]: 2025-09-09 04:55:26.072 [INFO][4289] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" host="localhost" Sep 9 04:55:26.108218 containerd[1525]: 2025-09-09 04:55:26.073 [INFO][4289] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c Sep 9 04:55:26.108218 containerd[1525]: 2025-09-09 04:55:26.076 [INFO][4289] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" host="localhost" Sep 9 04:55:26.108218 containerd[1525]: 2025-09-09 04:55:26.082 [INFO][4289] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" host="localhost" Sep 9 04:55:26.108218 containerd[1525]: 2025-09-09 04:55:26.082 [INFO][4289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" host="localhost" Sep 9 04:55:26.108218 containerd[1525]: 2025-09-09 04:55:26.082 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 04:55:26.108218 containerd[1525]: 2025-09-09 04:55:26.082 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" HandleID="k8s-pod-network.ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Workload="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.108326 containerd[1525]: 2025-09-09 04:55:26.085 [INFO][4240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kxqsx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a901a0c3-eff5-4d2f-bac6-0e583e580868", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kxqsx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali452658fafe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.108371 containerd[1525]: 2025-09-09 04:55:26.085 [INFO][4240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.108371 containerd[1525]: 2025-09-09 04:55:26.085 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali452658fafe6 ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.108371 containerd[1525]: 2025-09-09 04:55:26.090 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.108425 containerd[1525]: 2025-09-09 04:55:26.090 [INFO][4240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kxqsx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a901a0c3-eff5-4d2f-bac6-0e583e580868", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 6, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c", Pod:"csi-node-driver-kxqsx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali452658fafe6", MAC:"f2:7c:4e:2c:0c:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.108468 containerd[1525]: 2025-09-09 04:55:26.101 [INFO][4240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" Namespace="calico-system" Pod="csi-node-driver-kxqsx" WorkloadEndpoint="localhost-k8s-csi--node--driver--kxqsx-eth0" Sep 9 04:55:26.112899 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:26.124944 containerd[1525]: time="2025-09-09T04:55:26.124904383Z" level=info msg="connecting to shim ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c" address="unix:///run/containerd/s/8d063e2bd794ab9eed4324961933d4d38cd4b3089d634118029320674d2894e4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:26.136716 containerd[1525]: 
time="2025-09-09T04:55:26.136661484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-pbfzq,Uid:4b6b7fb0-4687-4580-8783-7975c90a032c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6\"" Sep 9 04:55:26.152831 systemd[1]: Started cri-containerd-ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c.scope - libcontainer container ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c. Sep 9 04:55:26.163759 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:26.182355 containerd[1525]: time="2025-09-09T04:55:26.182316754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kxqsx,Uid:a901a0c3-eff5-4d2f-bac6-0e583e580868,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c\"" Sep 9 04:55:26.195332 systemd-networkd[1442]: calid35e2942f19: Link UP Sep 9 04:55:26.195763 systemd-networkd[1442]: calid35e2942f19: Gained carrier Sep 9 04:55:26.209065 containerd[1525]: 2025-09-09 04:55:25.825 [INFO][4275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.209065 containerd[1525]: 2025-09-09 04:55:25.860 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0 calico-apiserver-54fbdb58df- calico-apiserver e2065b54-c63f-4dcd-bb07-0067793e6ac4 845 0 2025-09-09 04:55:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54fbdb58df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54fbdb58df-rn6pf eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calid35e2942f19 [] [] }} ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-" Sep 9 04:55:26.209065 containerd[1525]: 2025-09-09 04:55:25.861 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.209065 containerd[1525]: 2025-09-09 04:55:25.897 [INFO][4316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" HandleID="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Workload="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:25.898 [INFO][4316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" HandleID="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Workload="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001185a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54fbdb58df-rn6pf", "timestamp":"2025-09-09 04:55:25.897887131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:25.898 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.082 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.082 [INFO][4316] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.158 [INFO][4316] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" host="localhost" Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.164 [INFO][4316] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.172 [INFO][4316] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.173 [INFO][4316] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.175 [INFO][4316] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.209283 containerd[1525]: 2025-09-09 04:55:26.176 [INFO][4316] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" host="localhost" Sep 9 04:55:26.209470 containerd[1525]: 2025-09-09 04:55:26.178 [INFO][4316] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50 Sep 9 04:55:26.209470 containerd[1525]: 2025-09-09 04:55:26.183 [INFO][4316] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" host="localhost" Sep 9 04:55:26.209470 containerd[1525]: 2025-09-09 04:55:26.190 [INFO][4316] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" host="localhost" Sep 9 04:55:26.209470 containerd[1525]: 2025-09-09 04:55:26.190 [INFO][4316] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" host="localhost" Sep 9 04:55:26.209470 containerd[1525]: 2025-09-09 04:55:26.190 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:26.209470 containerd[1525]: 2025-09-09 04:55:26.190 [INFO][4316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" HandleID="k8s-pod-network.c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Workload="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.209575 containerd[1525]: 2025-09-09 04:55:26.192 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0", GenerateName:"calico-apiserver-54fbdb58df-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2065b54-c63f-4dcd-bb07-0067793e6ac4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fbdb58df", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54fbdb58df-rn6pf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid35e2942f19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.209624 containerd[1525]: 2025-09-09 04:55:26.192 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.209624 containerd[1525]: 2025-09-09 04:55:26.192 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid35e2942f19 ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.209624 containerd[1525]: 2025-09-09 04:55:26.196 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.209679 containerd[1525]: 2025-09-09 
04:55:26.196 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0", GenerateName:"calico-apiserver-54fbdb58df-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2065b54-c63f-4dcd-bb07-0067793e6ac4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54fbdb58df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50", Pod:"calico-apiserver-54fbdb58df-rn6pf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid35e2942f19", MAC:"46:3a:70:a9:7d:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.209744 containerd[1525]: 2025-09-09 04:55:26.206 [INFO][4275] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" Namespace="calico-apiserver" Pod="calico-apiserver-54fbdb58df-rn6pf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54fbdb58df--rn6pf-eth0" Sep 9 04:55:26.225445 containerd[1525]: time="2025-09-09T04:55:26.225405284Z" level=info msg="connecting to shim c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50" address="unix:///run/containerd/s/eb93f85d69580bafe48adfbea687a30a062673ed7bc76189ddee0b58f99d8a10" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:26.253868 systemd[1]: Started cri-containerd-c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50.scope - libcontainer container c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50. Sep 9 04:55:26.275926 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:26.327978 containerd[1525]: time="2025-09-09T04:55:26.327937705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54fbdb58df-rn6pf,Uid:e2065b54-c63f-4dcd-bb07-0067793e6ac4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50\"" Sep 9 04:55:26.386212 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:37652.service - OpenSSH per-connection server daemon (10.0.0.1:37652). Sep 9 04:55:26.436608 sshd[4559]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs Sep 9 04:55:26.438057 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:55:26.441721 systemd-logind[1503]: New session 8 of user core. Sep 9 04:55:26.447809 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 04:55:26.600382 sshd[4562]: Connection closed by 10.0.0.1 port 37652 Sep 9 04:55:26.600310 sshd-session[4559]: pam_unix(sshd:session): session closed for user core Sep 9 04:55:26.603710 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:37652.service: Deactivated successfully. Sep 9 04:55:26.606413 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 04:55:26.607158 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Sep 9 04:55:26.608422 systemd-logind[1503]: Removed session 8. Sep 9 04:55:26.739705 kubelet[2684]: E0909 04:55:26.739333 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:26.739705 kubelet[2684]: E0909 04:55:26.739527 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:26.740510 containerd[1525]: time="2025-09-09T04:55:26.739886859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvr5,Uid:fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:26.740510 containerd[1525]: time="2025-09-09T04:55:26.740036825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfx6j,Uid:0ebb790d-cd70-4281-9a1e-e3b05d5d8f12,Namespace:kube-system,Attempt:0,}" Sep 9 04:55:26.860123 systemd-networkd[1442]: calib9c6129b75c: Link UP Sep 9 04:55:26.860278 systemd-networkd[1442]: calib9c6129b75c: Gained carrier Sep 9 04:55:26.878458 containerd[1525]: 2025-09-09 04:55:26.778 [INFO][4586] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.878458 containerd[1525]: 2025-09-09 04:55:26.793 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0 coredns-674b8bbfcf- 
kube-system fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f 846 0 2025-09-09 04:54:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8wvr5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9c6129b75c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-" Sep 9 04:55:26.878458 containerd[1525]: 2025-09-09 04:55:26.793 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.878458 containerd[1525]: 2025-09-09 04:55:26.814 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" HandleID="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Workload="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.814 [INFO][4613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" HandleID="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Workload="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8wvr5", "timestamp":"2025-09-09 04:55:26.814431022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.814 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.814 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.814 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.823 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" host="localhost" Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.827 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.833 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.837 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.840 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.879022 containerd[1525]: 2025-09-09 04:55:26.840 [INFO][4613] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" host="localhost" Sep 9 04:55:26.879789 containerd[1525]: 2025-09-09 04:55:26.844 [INFO][4613] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57 Sep 9 04:55:26.879789 containerd[1525]: 
2025-09-09 04:55:26.848 [INFO][4613] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" host="localhost" Sep 9 04:55:26.879789 containerd[1525]: 2025-09-09 04:55:26.853 [INFO][4613] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" host="localhost" Sep 9 04:55:26.879789 containerd[1525]: 2025-09-09 04:55:26.853 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" host="localhost" Sep 9 04:55:26.879789 containerd[1525]: 2025-09-09 04:55:26.853 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:26.879789 containerd[1525]: 2025-09-09 04:55:26.853 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" HandleID="k8s-pod-network.b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Workload="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.879905 containerd[1525]: 2025-09-09 04:55:26.855 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 
54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8wvr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9c6129b75c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.879974 containerd[1525]: 2025-09-09 04:55:26.856 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.879974 containerd[1525]: 2025-09-09 04:55:26.856 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9c6129b75c ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.879974 containerd[1525]: 2025-09-09 04:55:26.857 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.880057 containerd[1525]: 2025-09-09 04:55:26.858 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57", Pod:"coredns-674b8bbfcf-8wvr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calib9c6129b75c", MAC:"7e:06:1c:0f:37:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.880057 containerd[1525]: 2025-09-09 04:55:26.874 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvr5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8wvr5-eth0" Sep 9 04:55:26.901549 containerd[1525]: time="2025-09-09T04:55:26.901510957Z" level=info msg="connecting to shim b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57" address="unix:///run/containerd/s/8c3ddce5cc0850d7fae1592888a224fa949bdc48be27cc679c354a6c7e4e0e60" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:26.928286 systemd[1]: Started cri-containerd-b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57.scope - libcontainer container b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57. 
Sep 9 04:55:26.942867 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:26.958680 systemd-networkd[1442]: calid2c1cb216ee: Link UP Sep 9 04:55:26.960479 systemd-networkd[1442]: calid2c1cb216ee: Gained carrier Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.773 [INFO][4577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.788 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0 coredns-674b8bbfcf- kube-system 0ebb790d-cd70-4281-9a1e-e3b05d5d8f12 847 0 2025-09-09 04:54:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gfx6j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2c1cb216ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.788 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.815 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" HandleID="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" 
Workload="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.815 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" HandleID="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Workload="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gfx6j", "timestamp":"2025-09-09 04:55:26.81539434 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.815 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.853 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.853 [INFO][4607] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.924 [INFO][4607] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.929 [INFO][4607] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.933 [INFO][4607] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.937 [INFO][4607] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.939 [INFO][4607] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.939 [INFO][4607] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.941 [INFO][4607] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.944 [INFO][4607] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.950 [INFO][4607] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.950 [INFO][4607] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" host="localhost" Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.950 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:55:26.980748 containerd[1525]: 2025-09-09 04:55:26.950 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" HandleID="k8s-pod-network.bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Workload="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.981610 containerd[1525]: 2025-09-09 04:55:26.954 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0ebb790d-cd70-4281-9a1e-e3b05d5d8f12", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gfx6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2c1cb216ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.981610 containerd[1525]: 2025-09-09 04:55:26.955 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.981610 containerd[1525]: 2025-09-09 04:55:26.955 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2c1cb216ee ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.981610 containerd[1525]: 2025-09-09 04:55:26.962 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.981610 containerd[1525]: 2025-09-09 04:55:26.962 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0ebb790d-cd70-4281-9a1e-e3b05d5d8f12", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa", Pod:"coredns-674b8bbfcf-gfx6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2c1cb216ee", MAC:"fa:87:c0:48:36:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:26.981610 containerd[1525]: 2025-09-09 04:55:26.974 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfx6j" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gfx6j-eth0" Sep 9 04:55:26.984338 containerd[1525]: time="2025-09-09T04:55:26.984272323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvr5,Uid:fd0d0e5e-8b8c-4bfe-8a6f-2809b358103f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57\"" Sep 9 04:55:26.984993 kubelet[2684]: E0909 04:55:26.984968 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:26.998949 containerd[1525]: time="2025-09-09T04:55:26.998913737Z" level=info msg="connecting to shim bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa" address="unix:///run/containerd/s/45148c4de632edcb2f866676659c4449427bb3a5608b571df96d6ddf70c88d7d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:27.003110 containerd[1525]: time="2025-09-09T04:55:27.002145543Z" level=info msg="CreateContainer within sandbox \"b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:55:27.009518 containerd[1525]: time="2025-09-09T04:55:27.009484863Z" level=info msg="Container 911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:27.015590 
containerd[1525]: time="2025-09-09T04:55:27.015552255Z" level=info msg="CreateContainer within sandbox \"b395d86cf0ae6e60c9b347f789f5e1f442047abdbb027a61aff475cbb7e2af57\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd\"" Sep 9 04:55:27.016100 containerd[1525]: time="2025-09-09T04:55:27.016074675Z" level=info msg="StartContainer for \"911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd\"" Sep 9 04:55:27.016929 containerd[1525]: time="2025-09-09T04:55:27.016904587Z" level=info msg="connecting to shim 911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd" address="unix:///run/containerd/s/8c3ddce5cc0850d7fae1592888a224fa949bdc48be27cc679c354a6c7e4e0e60" protocol=ttrpc version=3 Sep 9 04:55:27.021838 systemd[1]: Started cri-containerd-bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa.scope - libcontainer container bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa. Sep 9 04:55:27.045948 systemd[1]: Started cri-containerd-911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd.scope - libcontainer container 911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd. 
Sep 9 04:55:27.051506 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:27.081018 containerd[1525]: time="2025-09-09T04:55:27.080798429Z" level=info msg="StartContainer for \"911aa2eb7d3fc36c92181b6d1ec93c56aa4c4004208f692f3b4279567d7f37dd\" returns successfully" Sep 9 04:55:27.088157 containerd[1525]: time="2025-09-09T04:55:27.088112148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfx6j,Uid:0ebb790d-cd70-4281-9a1e-e3b05d5d8f12,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa\"" Sep 9 04:55:27.089127 kubelet[2684]: E0909 04:55:27.088916 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:27.094252 containerd[1525]: time="2025-09-09T04:55:27.094202781Z" level=info msg="CreateContainer within sandbox \"bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:55:27.104721 containerd[1525]: time="2025-09-09T04:55:27.104490414Z" level=info msg="Container 2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:27.113944 containerd[1525]: time="2025-09-09T04:55:27.113849212Z" level=info msg="CreateContainer within sandbox \"bdc94a4bd044060d17401f308357553901358236977f753bcf4fc2ce3e4656aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f\"" Sep 9 04:55:27.117469 containerd[1525]: time="2025-09-09T04:55:27.116847086Z" level=info msg="StartContainer for \"2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f\"" Sep 9 04:55:27.118501 containerd[1525]: time="2025-09-09T04:55:27.117835924Z" level=info msg="connecting to 
shim 2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f" address="unix:///run/containerd/s/45148c4de632edcb2f866676659c4449427bb3a5608b571df96d6ddf70c88d7d" protocol=ttrpc version=3 Sep 9 04:55:27.146879 systemd[1]: Started cri-containerd-2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f.scope - libcontainer container 2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f. Sep 9 04:55:27.188884 containerd[1525]: time="2025-09-09T04:55:27.188845118Z" level=info msg="StartContainer for \"2565c50bd3da21e69994b364b5658035a392e4469ded6983d55feb2a2cead89f\" returns successfully" Sep 9 04:55:27.345003 systemd-networkd[1442]: cali452658fafe6: Gained IPv6LL Sep 9 04:55:27.406887 systemd-networkd[1442]: cali5a8b8806dea: Gained IPv6LL Sep 9 04:55:27.471803 systemd-networkd[1442]: cali1de466530bf: Gained IPv6LL Sep 9 04:55:27.747958 containerd[1525]: time="2025-09-09T04:55:27.747854680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4f87864-7qvh4,Uid:021b6fc0-a415-4ddc-be6a-74700eea851d,Namespace:calico-system,Attempt:0,}" Sep 9 04:55:27.790852 systemd-networkd[1442]: calid35e2942f19: Gained IPv6LL Sep 9 04:55:27.872511 systemd-networkd[1442]: cali348ffdd1e90: Link UP Sep 9 04:55:27.873094 systemd-networkd[1442]: cali348ffdd1e90: Gained carrier Sep 9 04:55:27.889669 kubelet[2684]: E0909 04:55:27.889251 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.781 [INFO][4830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.802 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0 calico-kube-controllers-78f4f87864- 
calico-system 021b6fc0-a415-4ddc-be6a-74700eea851d 843 0 2025-09-09 04:55:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f4f87864 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78f4f87864-7qvh4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali348ffdd1e90 [] [] }} ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.802 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.830 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" HandleID="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Workload="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.831 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" HandleID="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Workload="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003234d0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78f4f87864-7qvh4", "timestamp":"2025-09-09 04:55:27.830858572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.831 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.831 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.831 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.841 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.846 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.850 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.852 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.854 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.854 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" host="localhost" Sep 9 
04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.855 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3 Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.859 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.866 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.866 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" host="localhost" Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.866 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 04:55:27.891800 containerd[1525]: 2025-09-09 04:55:27.866 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" HandleID="k8s-pod-network.6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Workload="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.892457 containerd[1525]: 2025-09-09 04:55:27.868 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0", GenerateName:"calico-kube-controllers-78f4f87864-", Namespace:"calico-system", SelfLink:"", UID:"021b6fc0-a415-4ddc-be6a-74700eea851d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f4f87864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78f4f87864-7qvh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali348ffdd1e90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:27.892457 containerd[1525]: 2025-09-09 04:55:27.868 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.892457 containerd[1525]: 2025-09-09 04:55:27.868 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali348ffdd1e90 ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.892457 containerd[1525]: 2025-09-09 04:55:27.873 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.892457 containerd[1525]: 2025-09-09 04:55:27.873 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0", GenerateName:"calico-kube-controllers-78f4f87864-", Namespace:"calico-system", SelfLink:"", UID:"021b6fc0-a415-4ddc-be6a-74700eea851d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f4f87864", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3", Pod:"calico-kube-controllers-78f4f87864-7qvh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali348ffdd1e90", MAC:"ba:ef:7d:25:24:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:55:27.892457 containerd[1525]: 2025-09-09 04:55:27.883 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" Namespace="calico-system" Pod="calico-kube-controllers-78f4f87864-7qvh4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f4f87864--7qvh4-eth0" Sep 9 04:55:27.893823 kubelet[2684]: E0909 04:55:27.892806 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:27.959415 kubelet[2684]: I0909 04:55:27.959355 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8wvr5" podStartSLOduration=35.959307081 podStartE2EDuration="35.959307081s" podCreationTimestamp="2025-09-09 04:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:55:27.956875068 +0000 UTC m=+40.313102624" watchObservedRunningTime="2025-09-09 04:55:27.959307081 +0000 UTC m=+40.315534597" Sep 9 04:55:27.988647 kubelet[2684]: I0909 04:55:27.988591 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gfx6j" podStartSLOduration=35.98857428 podStartE2EDuration="35.98857428s" podCreationTimestamp="2025-09-09 04:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:55:27.971962645 +0000 UTC m=+40.328190201" watchObservedRunningTime="2025-09-09 04:55:27.98857428 +0000 UTC m=+40.344801756" Sep 9 04:55:27.999513 containerd[1525]: time="2025-09-09T04:55:27.999186605Z" level=info msg="connecting to shim 6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3" address="unix:///run/containerd/s/756faf8d65ee7e744074a90490a59184f6573bb6e100c7dd8e5e9a5c3adf3967" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:55:28.034852 systemd[1]: Started cri-containerd-6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3.scope - libcontainer container 6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3. 
Sep 9 04:55:28.061009 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 04:55:28.081476 containerd[1525]: time="2025-09-09T04:55:28.081439234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f4f87864-7qvh4,Uid:021b6fc0-a415-4ddc-be6a-74700eea851d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3\"" Sep 9 04:55:28.152432 containerd[1525]: time="2025-09-09T04:55:28.152385119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:28.161500 containerd[1525]: time="2025-09-09T04:55:28.153074344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 04:55:28.161500 containerd[1525]: time="2025-09-09T04:55:28.155415351Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:28.161605 containerd[1525]: time="2025-09-09T04:55:28.159580747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.108422936s" Sep 9 04:55:28.161651 containerd[1525]: time="2025-09-09T04:55:28.161603102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 04:55:28.162264 containerd[1525]: time="2025-09-09T04:55:28.162226005Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:28.163064 containerd[1525]: time="2025-09-09T04:55:28.163028155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 04:55:28.166619 containerd[1525]: time="2025-09-09T04:55:28.166589888Z" level=info msg="CreateContainer within sandbox \"8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 04:55:28.196088 containerd[1525]: time="2025-09-09T04:55:28.196034426Z" level=info msg="Container 64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:28.201917 containerd[1525]: time="2025-09-09T04:55:28.201875403Z" level=info msg="CreateContainer within sandbox \"8ce4ccbae3e6f51ad1169511e58b7c44295664908f6e98cfd98ea1a178b11b37\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a\"" Sep 9 04:55:28.202734 containerd[1525]: time="2025-09-09T04:55:28.202713075Z" level=info msg="StartContainer for \"64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a\"" Sep 9 04:55:28.203784 containerd[1525]: time="2025-09-09T04:55:28.203757994Z" level=info msg="connecting to shim 64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a" address="unix:///run/containerd/s/6fdc45c3e820c6852984567e79838168e757f3b16705cf44712dafdd9a28ca9f" protocol=ttrpc version=3 Sep 9 04:55:28.221849 systemd[1]: Started cri-containerd-64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a.scope - libcontainer container 64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a. 
Sep 9 04:55:28.256133 containerd[1525]: time="2025-09-09T04:55:28.255836175Z" level=info msg="StartContainer for \"64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a\" returns successfully" Sep 9 04:55:28.622876 systemd-networkd[1442]: calib9c6129b75c: Gained IPv6LL Sep 9 04:55:28.753404 systemd-networkd[1442]: calid2c1cb216ee: Gained IPv6LL Sep 9 04:55:28.899332 kubelet[2684]: E0909 04:55:28.899190 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:28.899332 kubelet[2684]: E0909 04:55:28.899218 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:28.912723 kubelet[2684]: I0909 04:55:28.912580 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-6pqbx" podStartSLOduration=20.800789435 podStartE2EDuration="22.912562657s" podCreationTimestamp="2025-09-09 04:55:06 +0000 UTC" firstStartedPulling="2025-09-09 04:55:26.050657031 +0000 UTC m=+38.406884547" lastFinishedPulling="2025-09-09 04:55:28.162430253 +0000 UTC m=+40.518657769" observedRunningTime="2025-09-09 04:55:28.912162962 +0000 UTC m=+41.268390478" watchObservedRunningTime="2025-09-09 04:55:28.912562657 +0000 UTC m=+41.268790173" Sep 9 04:55:29.582845 systemd-networkd[1442]: cali348ffdd1e90: Gained IPv6LL Sep 9 04:55:29.901425 kubelet[2684]: I0909 04:55:29.901315 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:55:29.902394 kubelet[2684]: E0909 04:55:29.902030 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:29.902394 kubelet[2684]: E0909 04:55:29.902229 2684 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 04:55:30.093385 containerd[1525]: time="2025-09-09T04:55:30.093334359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:30.103442 containerd[1525]: time="2025-09-09T04:55:30.103405117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 04:55:30.104511 containerd[1525]: time="2025-09-09T04:55:30.104472675Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:30.107338 containerd[1525]: time="2025-09-09T04:55:30.107285335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:55:30.107990 containerd[1525]: time="2025-09-09T04:55:30.107955679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.944889482s" Sep 9 04:55:30.108025 containerd[1525]: time="2025-09-09T04:55:30.107987920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 04:55:30.109443 containerd[1525]: time="2025-09-09T04:55:30.109391010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 04:55:30.113635 containerd[1525]: 
time="2025-09-09T04:55:30.113414353Z" level=info msg="CreateContainer within sandbox \"b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:55:30.122630 containerd[1525]: time="2025-09-09T04:55:30.121847493Z" level=info msg="Container 23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:55:30.129116 containerd[1525]: time="2025-09-09T04:55:30.129063150Z" level=info msg="CreateContainer within sandbox \"b8c40ec27cf4e3d76177d9517fff40b80d5747e90e1a41e4eb35a1eb4255c7d6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd\"" Sep 9 04:55:30.129906 containerd[1525]: time="2025-09-09T04:55:30.129877379Z" level=info msg="StartContainer for \"23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd\"" Sep 9 04:55:30.130909 containerd[1525]: time="2025-09-09T04:55:30.130882174Z" level=info msg="connecting to shim 23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd" address="unix:///run/containerd/s/7c5b2fa7c6a4c656a0633a648bb278e3cdfde37a7d41a2a3a4f1594f61f326a4" protocol=ttrpc version=3 Sep 9 04:55:30.155849 systemd[1]: Started cri-containerd-23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd.scope - libcontainer container 23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd. 
Sep 9 04:55:30.190726 containerd[1525]: time="2025-09-09T04:55:30.190657821Z" level=info msg="StartContainer for \"23583df72b392cc3830374fe01629787c31355cd2b3511e1e92392b76b134cdd\" returns successfully"
Sep 9 04:55:30.919078 kubelet[2684]: I0909 04:55:30.919016 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54fbdb58df-pbfzq" podStartSLOduration=26.948113637 podStartE2EDuration="30.918998573s" podCreationTimestamp="2025-09-09 04:55:00 +0000 UTC" firstStartedPulling="2025-09-09 04:55:26.137918733 +0000 UTC m=+38.494146249" lastFinishedPulling="2025-09-09 04:55:30.108803669 +0000 UTC m=+42.465031185" observedRunningTime="2025-09-09 04:55:30.917993817 +0000 UTC m=+43.274221333" watchObservedRunningTime="2025-09-09 04:55:30.918998573 +0000 UTC m=+43.275226129"
Sep 9 04:55:31.161239 containerd[1525]: time="2025-09-09T04:55:31.161187786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:31.162153 containerd[1525]: time="2025-09-09T04:55:31.161874130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 9 04:55:31.166881 containerd[1525]: time="2025-09-09T04:55:31.166792221Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:31.170368 containerd[1525]: time="2025-09-09T04:55:31.170286302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:31.172648 containerd[1525]: time="2025-09-09T04:55:31.172532341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.063108649s"
Sep 9 04:55:31.172648 containerd[1525]: time="2025-09-09T04:55:31.172562862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 9 04:55:31.173695 containerd[1525]: time="2025-09-09T04:55:31.173669060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 04:55:31.183247 containerd[1525]: time="2025-09-09T04:55:31.183211992Z" level=info msg="CreateContainer within sandbox \"ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 9 04:55:31.199740 containerd[1525]: time="2025-09-09T04:55:31.199266951Z" level=info msg="Container db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:55:31.210450 containerd[1525]: time="2025-09-09T04:55:31.210275054Z" level=info msg="CreateContainer within sandbox \"ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122\""
Sep 9 04:55:31.212210 containerd[1525]: time="2025-09-09T04:55:31.211616781Z" level=info msg="StartContainer for \"db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122\""
Sep 9 04:55:31.215250 containerd[1525]: time="2025-09-09T04:55:31.215158064Z" level=info msg="connecting to shim db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122" address="unix:///run/containerd/s/8d063e2bd794ab9eed4324961933d4d38cd4b3089d634118029320674d2894e4" protocol=ttrpc version=3
Sep 9 04:55:31.241874 systemd[1]: Started cri-containerd-db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122.scope - libcontainer container db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122.
Sep 9 04:55:31.279541 containerd[1525]: time="2025-09-09T04:55:31.279502824Z" level=info msg="StartContainer for \"db4a5a42e95c4fff91a60d091ef90e6668b9a7b1e576eff245f90c6939876122\" returns successfully"
Sep 9 04:55:31.422290 containerd[1525]: time="2025-09-09T04:55:31.422179149Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:31.424245 containerd[1525]: time="2025-09-09T04:55:31.424206820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 04:55:31.428884 containerd[1525]: time="2025-09-09T04:55:31.426901674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 253.191412ms"
Sep 9 04:55:31.428884 containerd[1525]: time="2025-09-09T04:55:31.426945035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 9 04:55:31.431086 containerd[1525]: time="2025-09-09T04:55:31.431036458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 9 04:55:31.443736 containerd[1525]: time="2025-09-09T04:55:31.443357606Z" level=info msg="CreateContainer within sandbox \"c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 04:55:31.450793 containerd[1525]: time="2025-09-09T04:55:31.450755304Z" level=info msg="Container 3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:55:31.463624 containerd[1525]: time="2025-09-09T04:55:31.463458866Z" level=info msg="CreateContainer within sandbox \"c8d5642ea3fb9084b613f71c91f6f9a6d92aaf55147dc0a025af9f44e63b2d50\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187\""
Sep 9 04:55:31.464214 containerd[1525]: time="2025-09-09T04:55:31.464179571Z" level=info msg="StartContainer for \"3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187\""
Sep 9 04:55:31.465698 containerd[1525]: time="2025-09-09T04:55:31.465666463Z" level=info msg="connecting to shim 3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187" address="unix:///run/containerd/s/eb93f85d69580bafe48adfbea687a30a062673ed7bc76189ddee0b58f99d8a10" protocol=ttrpc version=3
Sep 9 04:55:31.495862 systemd[1]: Started cri-containerd-3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187.scope - libcontainer container 3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187.
Sep 9 04:55:31.545435 containerd[1525]: time="2025-09-09T04:55:31.544947062Z" level=info msg="StartContainer for \"3068c735c4fcc18a03f27ad42c840a1883377fe8bb379358037b95fb3f6c0187\" returns successfully"
Sep 9 04:55:31.615585 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:40198.service - OpenSSH per-connection server daemon (10.0.0.1:40198).
Sep 9 04:55:31.700863 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 40198 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:31.702240 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:31.707676 systemd-logind[1503]: New session 9 of user core.
Sep 9 04:55:31.714879 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 04:55:32.008506 kubelet[2684]: I0909 04:55:32.008354 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54fbdb58df-rn6pf" podStartSLOduration=26.907858048 podStartE2EDuration="32.008337545s" podCreationTimestamp="2025-09-09 04:55:00 +0000 UTC" firstStartedPulling="2025-09-09 04:55:26.32910443 +0000 UTC m=+38.685331946" lastFinishedPulling="2025-09-09 04:55:31.429583927 +0000 UTC m=+43.785811443" observedRunningTime="2025-09-09 04:55:32.007409554 +0000 UTC m=+44.363637070" watchObservedRunningTime="2025-09-09 04:55:32.008337545 +0000 UTC m=+44.364565061"
Sep 9 04:55:32.093918 sshd[5146]: Connection closed by 10.0.0.1 port 40198
Sep 9 04:55:32.095491 sshd-session[5143]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:32.099962 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:40198.service: Deactivated successfully.
Sep 9 04:55:32.102082 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 04:55:32.103491 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit.
Sep 9 04:55:32.105483 systemd-logind[1503]: Removed session 9.
Sep 9 04:55:32.972307 kubelet[2684]: I0909 04:55:32.972271 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:55:33.258151 kubelet[2684]: I0909 04:55:33.257629 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:55:33.490241 containerd[1525]: time="2025-09-09T04:55:33.490159435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f\" id:\"d95c38dc5750c92430db8af6e0c1e2bed3fc7c97fbcd367d04424c6962d77cbc\" pid:5232 exit_status:1 exited_at:{seconds:1757393733 nanos:477124759}"
Sep 9 04:55:33.609799 containerd[1525]: time="2025-09-09T04:55:33.609746629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:33.610534 containerd[1525]: time="2025-09-09T04:55:33.610514375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 9 04:55:33.611764 containerd[1525]: time="2025-09-09T04:55:33.611726015Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:33.614716 containerd[1525]: time="2025-09-09T04:55:33.614639993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:33.615812 containerd[1525]: time="2025-09-09T04:55:33.615785071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.184702772s"
Sep 9 04:55:33.615864 containerd[1525]: time="2025-09-09T04:55:33.615814872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 9 04:55:33.618252 containerd[1525]: time="2025-09-09T04:55:33.618217552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 04:55:33.628694 containerd[1525]: time="2025-09-09T04:55:33.628653061Z" level=info msg="CreateContainer within sandbox \"6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 04:55:33.638541 containerd[1525]: time="2025-09-09T04:55:33.638493029Z" level=info msg="Container eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:55:33.647719 containerd[1525]: time="2025-09-09T04:55:33.646981233Z" level=info msg="CreateContainer within sandbox \"6e551988f6cbce78b6b5355c3daa374d9cb6262b5b8e866f187eb975c42b9fe3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219\""
Sep 9 04:55:33.649104 containerd[1525]: time="2025-09-09T04:55:33.649064783Z" level=info msg="StartContainer for \"eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219\""
Sep 9 04:55:33.650307 containerd[1525]: time="2025-09-09T04:55:33.650243942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4923b5d86ea576bb50512b8ea3d399f46ecc163505708ea8c236a904ec3b478f\" id:\"9b50c76445438ef1ae65438773d430bc3f580b4846fa38324d2de2f5d970d769\" pid:5258 exit_status:1 exited_at:{seconds:1757393733 nanos:649898010}"
Sep 9 04:55:33.652705 containerd[1525]: time="2025-09-09T04:55:33.652651702Z" level=info msg="connecting to shim eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219" address="unix:///run/containerd/s/756faf8d65ee7e744074a90490a59184f6573bb6e100c7dd8e5e9a5c3adf3967" protocol=ttrpc version=3
Sep 9 04:55:33.677873 systemd[1]: Started cri-containerd-eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219.scope - libcontainer container eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219.
Sep 9 04:55:33.728382 containerd[1525]: time="2025-09-09T04:55:33.728343391Z" level=info msg="StartContainer for \"eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219\" returns successfully"
Sep 9 04:55:33.992328 kubelet[2684]: I0909 04:55:33.992020 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78f4f87864-7qvh4" podStartSLOduration=22.467883186999998 podStartE2EDuration="27.992001677s" podCreationTimestamp="2025-09-09 04:55:06 +0000 UTC" firstStartedPulling="2025-09-09 04:55:28.093201752 +0000 UTC m=+40.449429228" lastFinishedPulling="2025-09-09 04:55:33.617320202 +0000 UTC m=+45.973547718" observedRunningTime="2025-09-09 04:55:33.991575103 +0000 UTC m=+46.347802619" watchObservedRunningTime="2025-09-09 04:55:33.992001677 +0000 UTC m=+46.348229193"
Sep 9 04:55:34.041243 containerd[1525]: time="2025-09-09T04:55:34.041182975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219\" id:\"bec7d182d81452a0ee0424f9d4545c7410d070ef569580112043c2b3917d5748\" pid:5329 exited_at:{seconds:1757393734 nanos:34411833}"
Sep 9 04:55:34.860025 containerd[1525]: time="2025-09-09T04:55:34.859976243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:34.860695 containerd[1525]: time="2025-09-09T04:55:34.860660266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 9 04:55:34.861346 containerd[1525]: time="2025-09-09T04:55:34.861322287Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:34.864115 containerd[1525]: time="2025-09-09T04:55:34.864080138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:34.865469 containerd[1525]: time="2025-09-09T04:55:34.865435022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.247182749s"
Sep 9 04:55:34.865469 containerd[1525]: time="2025-09-09T04:55:34.865466623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 9 04:55:34.871598 containerd[1525]: time="2025-09-09T04:55:34.871563103Z" level=info msg="CreateContainer within sandbox \"ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 04:55:34.878832 containerd[1525]: time="2025-09-09T04:55:34.878799940Z" level=info msg="Container 4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:55:34.887721 containerd[1525]: time="2025-09-09T04:55:34.887665950Z" level=info msg="CreateContainer within sandbox \"ef8e544867d152f94bcde3b20cc8a0ee444195fa2e470000642b5d5e5383a14c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d\""
Sep 9 04:55:34.888149 containerd[1525]: time="2025-09-09T04:55:34.888115765Z" level=info msg="StartContainer for \"4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d\""
Sep 9 04:55:34.889517 containerd[1525]: time="2025-09-09T04:55:34.889487370Z" level=info msg="connecting to shim 4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d" address="unix:///run/containerd/s/8d063e2bd794ab9eed4324961933d4d38cd4b3089d634118029320674d2894e4" protocol=ttrpc version=3
Sep 9 04:55:34.907871 systemd[1]: Started cri-containerd-4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d.scope - libcontainer container 4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d.
Sep 9 04:55:34.949419 containerd[1525]: time="2025-09-09T04:55:34.949376532Z" level=info msg="StartContainer for \"4dc6904550fb58afdbdce528932ec480aa8ca8ef756ad94923693ad775e44c6d\" returns successfully"
Sep 9 04:55:34.995720 kubelet[2684]: I0909 04:55:34.993580 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kxqsx" podStartSLOduration=20.310986181 podStartE2EDuration="28.99356222s" podCreationTimestamp="2025-09-09 04:55:06 +0000 UTC" firstStartedPulling="2025-09-09 04:55:26.184084263 +0000 UTC m=+38.540311779" lastFinishedPulling="2025-09-09 04:55:34.866660302 +0000 UTC m=+47.222887818" observedRunningTime="2025-09-09 04:55:34.992087772 +0000 UTC m=+47.348315288" watchObservedRunningTime="2025-09-09 04:55:34.99356222 +0000 UTC m=+47.349789736"
Sep 9 04:55:35.053915 kubelet[2684]: I0909 04:55:35.053843 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:55:35.054291 kubelet[2684]: E0909 04:55:35.054257 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 04:55:35.587176 systemd-networkd[1442]: vxlan.calico: Link UP
Sep 9 04:55:35.587184 systemd-networkd[1442]: vxlan.calico: Gained carrier
Sep 9 04:55:35.824345 kubelet[2684]: I0909 04:55:35.824304 2684 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 04:55:35.827094 kubelet[2684]: I0909 04:55:35.827067 2684 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 04:55:35.984083 kubelet[2684]: E0909 04:55:35.983808 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 04:55:36.878883 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL
Sep 9 04:55:37.107402 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:40212.service - OpenSSH per-connection server daemon (10.0.0.1:40212).
Sep 9 04:55:37.165394 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 40212 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:37.167648 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:37.174028 systemd-logind[1503]: New session 10 of user core.
Sep 9 04:55:37.185897 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 04:55:37.381777 sshd[5553]: Connection closed by 10.0.0.1 port 40212
Sep 9 04:55:37.382749 sshd-session[5548]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:37.392773 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:40212.service: Deactivated successfully.
Sep 9 04:55:37.394385 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 04:55:37.395103 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit.
Sep 9 04:55:37.397586 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:40218.service - OpenSSH per-connection server daemon (10.0.0.1:40218).
Sep 9 04:55:37.398077 systemd-logind[1503]: Removed session 10.
Sep 9 04:55:37.457267 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 40218 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:37.458792 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:37.465774 systemd-logind[1503]: New session 11 of user core.
Sep 9 04:55:37.475903 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 04:55:37.680061 sshd[5572]: Connection closed by 10.0.0.1 port 40218
Sep 9 04:55:37.681216 sshd-session[5569]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:37.692371 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:40218.service: Deactivated successfully.
Sep 9 04:55:37.695217 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 04:55:37.696958 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
Sep 9 04:55:37.700662 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:40232.service - OpenSSH per-connection server daemon (10.0.0.1:40232).
Sep 9 04:55:37.704099 systemd-logind[1503]: Removed session 11.
Sep 9 04:55:37.771930 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 40232 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:37.773392 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:37.778563 systemd-logind[1503]: New session 12 of user core.
Sep 9 04:55:37.794847 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 04:55:37.975287 sshd[5587]: Connection closed by 10.0.0.1 port 40232
Sep 9 04:55:37.975892 sshd-session[5584]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:37.979599 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:40232.service: Deactivated successfully.
Sep 9 04:55:37.981414 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 04:55:37.982079 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit.
Sep 9 04:55:37.983530 systemd-logind[1503]: Removed session 12.
Sep 9 04:55:38.608553 kubelet[2684]: I0909 04:55:38.608487 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:55:38.732506 containerd[1525]: time="2025-09-09T04:55:38.732455227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a\" id:\"2cc1a6eb854da0263502d4e002f7c6ce81301d0872f5fa66b46b5f034717f5e5\" pid:5614 exited_at:{seconds:1757393738 nanos:732089016}"
Sep 9 04:55:38.829021 containerd[1525]: time="2025-09-09T04:55:38.828906138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a\" id:\"b1c2f290e8d375ef7a3d228fa3309f3dead9fe2ffbc2cd38a55ae7f64d78b9d6\" pid:5639 exited_at:{seconds:1757393738 nanos:828573448}"
Sep 9 04:55:42.987193 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:36816.service - OpenSSH per-connection server daemon (10.0.0.1:36816).
Sep 9 04:55:43.052893 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 36816 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:43.054487 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:43.058938 systemd-logind[1503]: New session 13 of user core.
Sep 9 04:55:43.077902 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 04:55:43.262769 sshd[5666]: Connection closed by 10.0.0.1 port 36816
Sep 9 04:55:43.263308 sshd-session[5663]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:43.275121 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:36816.service: Deactivated successfully.
Sep 9 04:55:43.277574 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 04:55:43.278868 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit.
Sep 9 04:55:43.280976 systemd-logind[1503]: Removed session 13.
Sep 9 04:55:43.283167 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818).
Sep 9 04:55:43.342681 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:43.344208 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:43.349011 systemd-logind[1503]: New session 14 of user core.
Sep 9 04:55:43.352835 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 04:55:43.557098 sshd[5684]: Connection closed by 10.0.0.1 port 36818
Sep 9 04:55:43.557317 sshd-session[5680]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:43.570035 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:36818.service: Deactivated successfully.
Sep 9 04:55:43.573211 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 04:55:43.575046 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit.
Sep 9 04:55:43.578457 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:36824.service - OpenSSH per-connection server daemon (10.0.0.1:36824).
Sep 9 04:55:43.579726 systemd-logind[1503]: Removed session 14.
Sep 9 04:55:43.632088 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 36824 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:43.633387 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:43.637705 systemd-logind[1503]: New session 15 of user core.
Sep 9 04:55:43.641830 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 04:55:44.268257 sshd[5699]: Connection closed by 10.0.0.1 port 36824
Sep 9 04:55:44.269046 sshd-session[5696]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:44.279644 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:36824.service: Deactivated successfully.
Sep 9 04:55:44.281662 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 04:55:44.283049 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit.
Sep 9 04:55:44.291079 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:36830.service - OpenSSH per-connection server daemon (10.0.0.1:36830).
Sep 9 04:55:44.292558 systemd-logind[1503]: Removed session 15.
Sep 9 04:55:44.351130 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 36830 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:44.352586 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:44.357325 systemd-logind[1503]: New session 16 of user core.
Sep 9 04:55:44.364860 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 04:55:44.705501 sshd[5721]: Connection closed by 10.0.0.1 port 36830
Sep 9 04:55:44.706167 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:44.720932 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:36830.service: Deactivated successfully.
Sep 9 04:55:44.724300 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 04:55:44.725836 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit.
Sep 9 04:55:44.729470 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:36838.service - OpenSSH per-connection server daemon (10.0.0.1:36838).
Sep 9 04:55:44.730292 systemd-logind[1503]: Removed session 16.
Sep 9 04:55:44.789728 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 36838 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:44.791598 kubelet[2684]: I0909 04:55:44.791179 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 04:55:44.791425 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:44.795557 systemd-logind[1503]: New session 17 of user core.
Sep 9 04:55:44.801870 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 04:55:44.937497 sshd[5736]: Connection closed by 10.0.0.1 port 36838
Sep 9 04:55:44.937855 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:44.941481 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:36838.service: Deactivated successfully.
Sep 9 04:55:44.943576 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 04:55:44.945073 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
Sep 9 04:55:44.946267 systemd-logind[1503]: Removed session 17.
Sep 9 04:55:49.949589 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:35582.service - OpenSSH per-connection server daemon (10.0.0.1:35582).
Sep 9 04:55:50.004539 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 35582 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:50.005946 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:50.010432 systemd-logind[1503]: New session 18 of user core.
Sep 9 04:55:50.015932 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 04:55:50.142063 sshd[5767]: Connection closed by 10.0.0.1 port 35582
Sep 9 04:55:50.142386 sshd-session[5764]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:50.146072 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:35582.service: Deactivated successfully.
Sep 9 04:55:50.149324 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 04:55:50.150071 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
Sep 9 04:55:50.151155 systemd-logind[1503]: Removed session 18.
Sep 9 04:55:51.398902 containerd[1525]: time="2025-09-09T04:55:51.398855588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eecc2a3339c2b2bebfa4dc5ef12f3b3d971ea3f0628e93be8cde3b49c4ae9219\" id:\"491e6190de14dcbfe7eefae76280fb92fd3a14833d7c6869a7d680770a7d3285\" pid:5791 exited_at:{seconds:1757393751 nanos:398617422}"
Sep 9 04:55:55.157990 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:35598.service - OpenSSH per-connection server daemon (10.0.0.1:35598).
Sep 9 04:55:55.226951 sshd[5804]: Accepted publickey for core from 10.0.0.1 port 35598 ssh2: RSA SHA256:y2XmME+qZ8Vxpxr1aV4RZrrEEzQsCrVNEgcY8K5ZHGs
Sep 9 04:55:55.228147 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:55.232371 systemd-logind[1503]: New session 19 of user core.
Sep 9 04:55:55.240868 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 04:55:55.384732 sshd[5807]: Connection closed by 10.0.0.1 port 35598
Sep 9 04:55:55.385055 sshd-session[5804]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:55.387842 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:35598.service: Deactivated successfully.
Sep 9 04:55:55.389606 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 04:55:55.390856 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
Sep 9 04:55:55.392340 systemd-logind[1503]: Removed session 19.
Sep 9 04:55:56.667406 containerd[1525]: time="2025-09-09T04:55:56.667359455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64ecbc259d211366fcfe68bbd8c250156f4c21c313ab908fb819ce0faca13b9a\" id:\"4a8c6fe9b82bf00f474e1f50596e18e96123f5cbd290573a8c6f73c1d871c542\" pid:5840 exited_at:{seconds:1757393756 nanos:667052220}"