Oct 13 04:58:55.395873 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 13 04:58:55.395899 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Mon Oct 13 03:30:16 -00 2025
Oct 13 04:58:55.395908 kernel: KASLR enabled
Oct 13 04:58:55.395914 kernel: efi: EFI v2.7 by EDK II
Oct 13 04:58:55.395920 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 13 04:58:55.395926 kernel: random: crng init done
Oct 13 04:58:55.395933 kernel: secureboot: Secure boot disabled
Oct 13 04:58:55.395940 kernel: ACPI: Early table checksum verification disabled
Oct 13 04:58:55.395947 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 13 04:58:55.395954 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 13 04:58:55.395960 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.395966 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.395972 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.395979 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.396011 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.396019 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.396026 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.396032 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.396039 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 04:58:55.396046 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 13 04:58:55.396052 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 13 04:58:55.396059 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 04:58:55.396081 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 13 04:58:55.396090 kernel: Zone ranges:
Oct 13 04:58:55.396097 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 04:58:55.396104 kernel:   DMA32    empty
Oct 13 04:58:55.396111 kernel:   Normal   empty
Oct 13 04:58:55.396117 kernel:   Device   empty
Oct 13 04:58:55.396123 kernel: Movable zone start for each node
Oct 13 04:58:55.396130 kernel: Early memory node ranges
Oct 13 04:58:55.396137 kernel:   node   0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 13 04:58:55.396144 kernel:   node   0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 13 04:58:55.396150 kernel:   node   0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 13 04:58:55.396157 kernel:   node   0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 13 04:58:55.396166 kernel:   node   0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 13 04:58:55.396173 kernel:   node   0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 13 04:58:55.396179 kernel:   node   0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 13 04:58:55.396186 kernel:   node   0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 13 04:58:55.396192 kernel:   node   0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 13 04:58:55.396199 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 13 04:58:55.396210 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 13 04:58:55.396217 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 13 04:58:55.396225 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 13 04:58:55.396232 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 13 04:58:55.396239 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 13 04:58:55.396246 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 13 04:58:55.396266 kernel: psci: probing for conduit method from ACPI.
Oct 13 04:58:55.396274 kernel: psci: PSCIv1.1 detected in firmware.
Oct 13 04:58:55.396283 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 13 04:58:55.396290 kernel: psci: Trusted OS migration not required
Oct 13 04:58:55.396297 kernel: psci: SMC Calling Convention v1.1
Oct 13 04:58:55.396304 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 13 04:58:55.396311 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 13 04:58:55.396318 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 13 04:58:55.396326 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 13 04:58:55.396333 kernel: Detected PIPT I-cache on CPU0
Oct 13 04:58:55.396340 kernel: CPU features: detected: GIC system register CPU interface
Oct 13 04:58:55.396347 kernel: CPU features: detected: Spectre-v4
Oct 13 04:58:55.396354 kernel: CPU features: detected: Spectre-BHB
Oct 13 04:58:55.396368 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 13 04:58:55.396375 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 13 04:58:55.396383 kernel: CPU features: detected: ARM erratum 1418040
Oct 13 04:58:55.396390 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 13 04:58:55.396397 kernel: alternatives: applying boot alternatives
Oct 13 04:58:55.396405 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99
Oct 13 04:58:55.396412 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 04:58:55.396419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 04:58:55.396426 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 04:58:55.396433 kernel: Fallback order for Node 0: 0
Oct 13 04:58:55.396442 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 643072
Oct 13 04:58:55.396449 kernel: Policy zone: DMA
Oct 13 04:58:55.396456 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 04:58:55.396463 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 13 04:58:55.396470 kernel: software IO TLB: area num 4.
Oct 13 04:58:55.396477 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 13 04:58:55.396484 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 13 04:58:55.396491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 04:58:55.396498 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 04:58:55.396520 kernel: rcu: RCU event tracing is enabled.
Oct 13 04:58:55.396531 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 04:58:55.396540 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 04:58:55.396547 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 04:58:55.396555 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 04:58:55.396562 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 04:58:55.396569 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 04:58:55.396576 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 04:58:55.396603 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 13 04:58:55.396612 kernel: GICv3: 256 SPIs implemented
Oct 13 04:58:55.396620 kernel: GICv3: 0 Extended SPIs implemented
Oct 13 04:58:55.396627 kernel: Root IRQ handler: gic_handle_irq
Oct 13 04:58:55.396634 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 13 04:58:55.396643 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 13 04:58:55.396650 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 13 04:58:55.396658 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 13 04:58:55.396665 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 13 04:58:55.396672 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 13 04:58:55.396679 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 13 04:58:55.396686 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 13 04:58:55.396694 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 04:58:55.396701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 04:58:55.396708 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 13 04:58:55.396715 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 13 04:58:55.396723 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 13 04:58:55.396731 kernel: arm-pv: using stolen time PV
Oct 13 04:58:55.396738 kernel: Console: colour dummy device 80x25
Oct 13 04:58:55.396746 kernel: ACPI: Core revision 20240827
Oct 13 04:58:55.396754 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 13 04:58:55.396761 kernel: pid_max: default: 32768 minimum: 301
Oct 13 04:58:55.396769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 04:58:55.396776 kernel: landlock: Up and running.
Oct 13 04:58:55.396790 kernel: SELinux:  Initializing.
Oct 13 04:58:55.396797 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 04:58:55.396805 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 04:58:55.396812 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 04:58:55.396820 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 13 04:58:55.396828 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 04:58:55.396835 kernel: Remapping and enabling EFI services.
Oct 13 04:58:55.396857 kernel: smp: Bringing up secondary CPUs ...
Oct 13 04:58:55.396872 kernel: Detected PIPT I-cache on CPU1
Oct 13 04:58:55.396880 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 13 04:58:55.396889 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 13 04:58:55.396896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 04:58:55.396904 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 13 04:58:55.396912 kernel: Detected PIPT I-cache on CPU2
Oct 13 04:58:55.396934 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 13 04:58:55.396945 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 13 04:58:55.396953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 04:58:55.396961 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 13 04:58:55.396969 kernel: Detected PIPT I-cache on CPU3
Oct 13 04:58:55.396977 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 13 04:58:55.396985 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 13 04:58:55.396994 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 13 04:58:55.397002 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 13 04:58:55.397009 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 04:58:55.397017 kernel: SMP: Total of 4 processors activated.
Oct 13 04:58:55.397025 kernel: CPU: All CPU(s) started at EL1
Oct 13 04:58:55.397033 kernel: CPU features: detected: 32-bit EL0 Support
Oct 13 04:58:55.397040 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 13 04:58:55.397048 kernel: CPU features: detected: Common not Private translations
Oct 13 04:58:55.397057 kernel: CPU features: detected: CRC32 instructions
Oct 13 04:58:55.397065 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 13 04:58:55.397073 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 13 04:58:55.397081 kernel: CPU features: detected: LSE atomic instructions
Oct 13 04:58:55.397088 kernel: CPU features: detected: Privileged Access Never
Oct 13 04:58:55.397096 kernel: CPU features: detected: RAS Extension Support
Oct 13 04:58:55.397104 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 13 04:58:55.397113 kernel: alternatives: applying system-wide alternatives
Oct 13 04:58:55.397121 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 13 04:58:55.397133 kernel: Memory: 2450400K/2572288K available (11200K kernel code, 2456K rwdata, 9080K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved)
Oct 13 04:58:55.397142 kernel: devtmpfs: initialized
Oct 13 04:58:55.397150 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 04:58:55.397158 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 04:58:55.397166 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 13 04:58:55.397175 kernel: 0 pages in range for non-PLT usage
Oct 13 04:58:55.397183 kernel: 515040 pages in range for PLT usage
Oct 13 04:58:55.397190 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 04:58:55.397198 kernel: SMBIOS 3.0.0 present.
Oct 13 04:58:55.397206 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 13 04:58:55.397213 kernel: DMI: Memory slots populated: 1/1
Oct 13 04:58:55.397221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 04:58:55.397229 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 13 04:58:55.397258 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 04:58:55.397269 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 04:58:55.397277 kernel: audit: initializing netlink subsys (disabled)
Oct 13 04:58:55.397284 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Oct 13 04:58:55.397292 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 04:58:55.397300 kernel: cpuidle: using governor menu
Oct 13 04:58:55.397308 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 13 04:58:55.397333 kernel: ASID allocator initialised with 32768 entries
Oct 13 04:58:55.397342 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 04:58:55.397350 kernel: Serial: AMBA PL011 UART driver
Oct 13 04:58:55.397358 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 04:58:55.397370 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 04:58:55.397378 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 13 04:58:55.397386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 13 04:58:55.397396 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 04:58:55.397404 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 04:58:55.397412 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 13 04:58:55.397433 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 13 04:58:55.397441 kernel: ACPI: Added _OSI(Module Device)
Oct 13 04:58:55.397449 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 04:58:55.397456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 04:58:55.397466 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 04:58:55.397473 kernel: ACPI: Interpreter enabled
Oct 13 04:58:55.397481 kernel: ACPI: Using GIC for interrupt routing
Oct 13 04:58:55.397489 kernel: ACPI: MCFG table detected, 1 entries
Oct 13 04:58:55.397497 kernel: ACPI: CPU0 has been hot-added
Oct 13 04:58:55.397504 kernel: ACPI: CPU1 has been hot-added
Oct 13 04:58:55.397512 kernel: ACPI: CPU2 has been hot-added
Oct 13 04:58:55.397520 kernel: ACPI: CPU3 has been hot-added
Oct 13 04:58:55.397529 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 13 04:58:55.397537 kernel: printk: legacy console [ttyAMA0] enabled
Oct 13 04:58:55.397545 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 04:58:55.397722 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 04:58:55.397840 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 13 04:58:55.397926 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 13 04:58:55.398012 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 13 04:58:55.398132 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 13 04:58:55.398145 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 13 04:58:55.398153 kernel: PCI host bridge to bus 0000:00
Oct 13 04:58:55.398246 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 13 04:58:55.398351 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 13 04:58:55.398442 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 13 04:58:55.398516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 04:58:55.398659 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 13 04:58:55.398757 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 04:58:55.398847 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 13 04:58:55.398932 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 13 04:58:55.399015 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 13 04:58:55.399140 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 13 04:58:55.399231 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 13 04:58:55.399350 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 13 04:58:55.399438 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 13 04:58:55.399547 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 13 04:58:55.399654 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 13 04:58:55.399667 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 13 04:58:55.399676 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 13 04:58:55.399683 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 13 04:58:55.399692 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 13 04:58:55.399702 kernel: iommu: Default domain type: Translated
Oct 13 04:58:55.399710 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 13 04:58:55.399718 kernel: efivars: Registered efivars operations
Oct 13 04:58:55.399726 kernel: vgaarb: loaded
Oct 13 04:58:55.399734 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 13 04:58:55.399741 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 04:58:55.399750 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 04:58:55.399757 kernel: pnp: PnP ACPI init
Oct 13 04:58:55.399856 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 13 04:58:55.399868 kernel: pnp: PnP ACPI: found 1 devices
Oct 13 04:58:55.399895 kernel: NET: Registered PF_INET protocol family
Oct 13 04:58:55.399906 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 04:58:55.399914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 04:58:55.399923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 04:58:55.399934 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 04:58:55.399942 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 04:58:55.399950 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 04:58:55.399973 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 04:58:55.399982 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 04:58:55.399990 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 04:58:55.399998 kernel: PCI: CLS 0 bytes, default 64
Oct 13 04:58:55.400008 kernel: kvm [1]: HYP mode not available
Oct 13 04:58:55.400016 kernel: Initialise system trusted keyrings
Oct 13 04:58:55.400024 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 04:58:55.400032 kernel: Key type asymmetric registered
Oct 13 04:58:55.400040 kernel: Asymmetric key parser 'x509' registered
Oct 13 04:58:55.400047 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 13 04:58:55.400055 kernel: io scheduler mq-deadline registered
Oct 13 04:58:55.400064 kernel: io scheduler kyber registered
Oct 13 04:58:55.400072 kernel: io scheduler bfq registered
Oct 13 04:58:55.400080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 13 04:58:55.400088 kernel: ACPI: button: Power Button [PWRB]
Oct 13 04:58:55.400097 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 13 04:58:55.400193 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 13 04:58:55.400204 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 04:58:55.400213 kernel: thunder_xcv, ver 1.0
Oct 13 04:58:55.400221 kernel: thunder_bgx, ver 1.0
Oct 13 04:58:55.400229 kernel: nicpf, ver 1.0
Oct 13 04:58:55.400237 kernel: nicvf, ver 1.0
Oct 13 04:58:55.400346 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 13 04:58:55.400465 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T04:58:54 UTC (1760331534)
Oct 13 04:58:55.400478 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 04:58:55.400491 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 13 04:58:55.400498 kernel: watchdog: NMI not fully supported
Oct 13 04:58:55.400524 kernel: watchdog: Hard watchdog permanently disabled
Oct 13 04:58:55.400532 kernel: NET: Registered PF_INET6 protocol family
Oct 13 04:58:55.400540 kernel: Segment Routing with IPv6
Oct 13 04:58:55.400548 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 04:58:55.400556 kernel: NET: Registered PF_PACKET protocol family
Oct 13 04:58:55.400566 kernel: Key type dns_resolver registered
Oct 13 04:58:55.400574 kernel: registered taskstats version 1
Oct 13 04:58:55.400582 kernel: Loading compiled-in X.509 certificates
Oct 13 04:58:55.400590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 0d5be6bcdaeaf26c55e47d87e2567b03196058e4'
Oct 13 04:58:55.400598 kernel: Demotion targets for Node 0: null
Oct 13 04:58:55.400606 kernel: Key type .fscrypt registered
Oct 13 04:58:55.400614 kernel: Key type fscrypt-provisioning registered
Oct 13 04:58:55.400624 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 04:58:55.400632 kernel: ima: Allocated hash algorithm: sha1
Oct 13 04:58:55.400640 kernel: ima: No architecture policies found
Oct 13 04:58:55.400647 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 13 04:58:55.400655 kernel: clk: Disabling unused clocks
Oct 13 04:58:55.400663 kernel: PM: genpd: Disabling unused power domains
Oct 13 04:58:55.400671 kernel: Freeing unused kernel memory: 12992K
Oct 13 04:58:55.400681 kernel: Run /init as init process
Oct 13 04:58:55.400688 kernel:   with arguments:
Oct 13 04:58:55.400696 kernel:     /init
Oct 13 04:58:55.400704 kernel:   with environment:
Oct 13 04:58:55.400722 kernel:     HOME=/
Oct 13 04:58:55.400734 kernel:     TERM=linux
Oct 13 04:58:55.400742 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 04:58:55.400870 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 13 04:58:55.400983 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 13 04:58:55.400996 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 04:58:55.401005 kernel: GPT:16515071 != 27000831
Oct 13 04:58:55.401012 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 04:58:55.401020 kernel: GPT:16515071 != 27000831
Oct 13 04:58:55.401028 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 04:58:55.401037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 04:58:55.401046 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401054 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401061 kernel: SCSI subsystem initialized
Oct 13 04:58:55.401069 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401077 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 04:58:55.401085 kernel: device-mapper: uevent: version 1.0.3
Oct 13 04:58:55.401094 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 04:58:55.401102 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 13 04:58:55.401110 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401118 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401126 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401154 kernel: raid6: neonx8   gen() 15123 MB/s
Oct 13 04:58:55.401163 kernel: raid6: neonx4   gen() 15086 MB/s
Oct 13 04:58:55.401173 kernel: raid6: neonx2   gen() 12606 MB/s
Oct 13 04:58:55.401181 kernel: raid6: neonx1   gen()  9953 MB/s
Oct 13 04:58:55.401189 kernel: raid6: int64x8  gen()  6626 MB/s
Oct 13 04:58:55.401197 kernel: raid6: int64x4  gen()  7045 MB/s
Oct 13 04:58:55.401204 kernel: raid6: int64x2  gen()  5882 MB/s
Oct 13 04:58:55.401227 kernel: raid6: int64x1  gen()  4835 MB/s
Oct 13 04:58:55.401235 kernel: raid6: using algorithm neonx8 gen() 15123 MB/s
Oct 13 04:58:55.401243 kernel: raid6: .... xor() 11468 MB/s, rmw enabled
Oct 13 04:58:55.401264 kernel: raid6: using neon recovery algorithm
Oct 13 04:58:55.401273 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401281 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401289 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401299 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401308 kernel: xor: measuring software checksum speed
Oct 13 04:58:55.401316 kernel:    8regs           : 17399 MB/sec
Oct 13 04:58:55.401324 kernel:    32regs          : 21670 MB/sec
Oct 13 04:58:55.401335 kernel:    arm64_neon      : 25214 MB/sec
Oct 13 04:58:55.401343 kernel: xor: using function: arm64_neon (25214 MB/sec)
Oct 13 04:58:55.401351 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401358 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 04:58:55.401374 kernel: BTRFS: device fsid 976d1a25-6e06-4ce9-b674-96d83e61f95d devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (203)
Oct 13 04:58:55.401382 kernel: BTRFS info (device dm-0): first mount of filesystem 976d1a25-6e06-4ce9-b674-96d83e61f95d
Oct 13 04:58:55.401390 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 13 04:58:55.401401 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 04:58:55.401409 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 04:58:55.401417 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 04:58:55.401424 kernel: loop: module loaded
Oct 13 04:58:55.401432 kernel: loop0: detected capacity change from 0 to 91456
Oct 13 04:58:55.401441 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 04:58:55.401450 systemd[1]: Successfully made /usr/ read-only.
Oct 13 04:58:55.401463 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 04:58:55.401473 systemd[1]: Detected virtualization kvm.
Oct 13 04:58:55.401481 systemd[1]: Detected architecture arm64.
Oct 13 04:58:55.401490 systemd[1]: Running in initrd.
Oct 13 04:58:55.401498 systemd[1]: No hostname configured, using default hostname.
Oct 13 04:58:55.401507 systemd[1]: Hostname set to .
Oct 13 04:58:55.401517 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 13 04:58:55.401526 systemd[1]: Queued start job for default target initrd.target.
Oct 13 04:58:55.401547 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 04:58:55.401559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 04:58:55.401567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 04:58:55.401577 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 04:58:55.401593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 04:58:55.401604 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 04:58:55.401628 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 04:58:55.401638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 04:58:55.401663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 04:58:55.401672 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 04:58:55.401681 systemd[1]: Reached target paths.target - Path Units.
Oct 13 04:58:55.401690 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 04:58:55.401699 systemd[1]: Reached target swap.target - Swaps.
Oct 13 04:58:55.401708 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 04:58:55.401716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 04:58:55.401727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 04:58:55.401736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 04:58:55.401744 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 04:58:55.401753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 04:58:55.401762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 04:58:55.401771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 04:58:55.401781 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 04:58:55.401792 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 04:58:55.401801 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 04:58:55.401810 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 04:58:55.401819 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 04:58:55.401829 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 04:58:55.401839 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 04:58:55.401849 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 04:58:55.401858 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 04:58:55.401867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 04:58:55.401876 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 04:58:55.401886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 04:58:55.401895 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 04:58:55.401905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 04:58:55.401914 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 04:58:55.401944 systemd-journald[343]: Collecting audit messages is disabled.
Oct 13 04:58:55.401965 kernel: Bridge firewalling registered
Oct 13 04:58:55.401974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 04:58:55.401983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 04:58:55.401993 systemd-journald[343]: Journal started
Oct 13 04:58:55.402013 systemd-journald[343]: Runtime Journal (/run/log/journal/04ba903fa2d147d09603844744706635) is 6M, max 48.5M, 42.4M free.
Oct 13 04:58:55.396550 systemd-modules-load[344]: Inserted module 'br_netfilter'
Oct 13 04:58:55.404281 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 04:58:55.416517 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 04:58:55.418387 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 04:58:55.422409 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 04:58:55.423862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 04:58:55.426587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 04:58:55.442513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:58:55.445406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 04:58:55.448477 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 04:58:55.450370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 04:58:55.453378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:58:55.462020 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 04:58:55.464177 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 04:58:55.485421 systemd-resolved[373]: Positive Trust Anchors: Oct 13 04:58:55.485435 systemd-resolved[373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 04:58:55.485438 systemd-resolved[373]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 04:58:55.485470 systemd-resolved[373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 04:58:55.496586 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1a81e36b39d22063d1d9b2ac3307af6d1e57cfd926c8fafd214fb74284e73d99 Oct 13 04:58:55.506753 systemd-resolved[373]: Defaulting to hostname 'linux'. Oct 13 04:58:55.507673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 04:58:55.508701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 04:58:55.561295 kernel: Loading iSCSI transport class v2.0-870. Oct 13 04:58:55.569276 kernel: iscsi: registered transport (tcp) Oct 13 04:58:55.583278 kernel: iscsi: registered transport (qla4xxx) Oct 13 04:58:55.583323 kernel: QLogic iSCSI HBA Driver Oct 13 04:58:55.603050 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 04:58:55.620120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 04:58:55.622164 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Oct 13 04:58:55.666879 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 04:58:55.669079 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 04:58:55.670691 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 04:58:55.706946 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 04:58:55.710041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 04:58:55.738977 systemd-udevd[626]: Using default interface naming scheme 'v257'. Oct 13 04:58:55.746666 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 04:58:55.749414 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 04:58:55.772410 dracut-pre-trigger[698]: rd.md=0: removing MD RAID activation Oct 13 04:58:55.778570 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 04:58:55.781223 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 04:58:55.796129 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 04:58:55.798169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 04:58:55.827367 systemd-networkd[744]: lo: Link UP Oct 13 04:58:55.828112 systemd-networkd[744]: lo: Gained carrier Oct 13 04:58:55.829197 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 04:58:55.830078 systemd[1]: Reached target network.target - Network. Oct 13 04:58:55.846322 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 04:58:55.848766 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 04:58:55.892512 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Oct 13 04:58:55.918137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 04:58:55.927849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 04:58:55.935712 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:58:55.935726 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 04:58:55.936516 systemd-networkd[744]: eth0: Link UP Oct 13 04:58:55.936652 systemd-networkd[744]: eth0: Gained carrier Oct 13 04:58:55.936662 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:58:55.937465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 04:58:55.942348 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 04:58:55.943735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 04:58:55.943847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:58:55.945065 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:58:55.955315 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 04:58:55.955896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:58:55.963333 disk-uuid[803]: Primary Header is updated. Oct 13 04:58:55.963333 disk-uuid[803]: Secondary Entries is updated. Oct 13 04:58:55.963333 disk-uuid[803]: Secondary Header is updated. Oct 13 04:58:55.972307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 04:58:55.978065 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 13 04:58:55.980331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 04:58:55.982049 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 04:58:55.985408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 04:58:55.992417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:58:56.016606 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 04:58:56.995114 disk-uuid[806]: Warning: The kernel is still using the old partition table. Oct 13 04:58:56.995114 disk-uuid[806]: The new table will be used at the next reboot or after you Oct 13 04:58:56.995114 disk-uuid[806]: run partprobe(8) or kpartx(8) Oct 13 04:58:56.995114 disk-uuid[806]: The operation has completed successfully. Oct 13 04:58:57.000316 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 04:58:57.000430 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 04:58:57.004421 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 04:58:57.045340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832) Oct 13 04:58:57.045397 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:58:57.045410 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:58:57.049267 kernel: BTRFS info (device vda6): turning on async discard Oct 13 04:58:57.049306 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 04:58:57.055280 kernel: BTRFS info (device vda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:58:57.055689 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 04:58:57.058007 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 04:58:57.162964 ignition[851]: Ignition 2.22.0 Oct 13 04:58:57.162982 ignition[851]: Stage: fetch-offline Oct 13 04:58:57.163021 ignition[851]: no configs at "/usr/lib/ignition/base.d" Oct 13 04:58:57.163031 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:58:57.163126 ignition[851]: parsed url from cmdline: "" Oct 13 04:58:57.163130 ignition[851]: no config URL provided Oct 13 04:58:57.163135 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 04:58:57.163144 ignition[851]: no config at "/usr/lib/ignition/user.ign" Oct 13 04:58:57.163182 ignition[851]: op(1): [started] loading QEMU firmware config module Oct 13 04:58:57.163186 ignition[851]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 04:58:57.169096 ignition[851]: op(1): [finished] loading QEMU firmware config module Oct 13 04:58:57.212955 ignition[851]: parsing config with SHA512: a2e7a8d6e9c835987074ecbd6246b1fdf4d4389303803f6c6447bf1530f7a6f3cb2c924d07e80aa450948649ffbd1ac58cb03bbd4af20b9b85192b2d6e375528 Oct 13 04:58:57.217710 unknown[851]: fetched base config from "system" Oct 13 04:58:57.217741 unknown[851]: fetched user config from "qemu" Oct 13 04:58:57.218110 ignition[851]: fetch-offline: fetch-offline passed Oct 13 04:58:57.218170 ignition[851]: Ignition finished successfully Oct 13 04:58:57.221068 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 04:58:57.222858 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 04:58:57.223844 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 13 04:58:57.263530 ignition[868]: Ignition 2.22.0 Oct 13 04:58:57.263545 ignition[868]: Stage: kargs Oct 13 04:58:57.263677 ignition[868]: no configs at "/usr/lib/ignition/base.d" Oct 13 04:58:57.263685 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:58:57.264475 ignition[868]: kargs: kargs passed Oct 13 04:58:57.264521 ignition[868]: Ignition finished successfully Oct 13 04:58:57.269592 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 04:58:57.272412 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 04:58:57.308873 ignition[876]: Ignition 2.22.0 Oct 13 04:58:57.308892 ignition[876]: Stage: disks Oct 13 04:58:57.309021 ignition[876]: no configs at "/usr/lib/ignition/base.d" Oct 13 04:58:57.309030 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:58:57.309760 ignition[876]: disks: disks passed Oct 13 04:58:57.312530 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 04:58:57.309801 ignition[876]: Ignition finished successfully Oct 13 04:58:57.314028 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 04:58:57.315247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 04:58:57.317080 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 04:58:57.318492 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 04:58:57.320103 systemd[1]: Reached target basic.target - Basic System. Oct 13 04:58:57.322767 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 04:58:57.364787 systemd-fsck[885]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 04:58:57.369595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 04:58:57.371806 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 13 04:58:57.440285 kernel: EXT4-fs (vda9): mounted filesystem a42694d5-feb9-4394-9ac1-a45818242d2d r/w with ordered data mode. Quota mode: none. Oct 13 04:58:57.440249 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 04:58:57.441297 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 04:58:57.443342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 04:58:57.444740 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 04:58:57.445582 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 04:58:57.445616 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 04:58:57.445639 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 04:58:57.459786 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 04:58:57.462008 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 04:58:57.466225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893) Oct 13 04:58:57.466264 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:58:57.466282 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:58:57.470273 kernel: BTRFS info (device vda6): turning on async discard Oct 13 04:58:57.470320 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 04:58:57.471423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 04:58:57.499392 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 04:58:57.503527 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Oct 13 04:58:57.507204 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 04:58:57.510825 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 04:58:57.584331 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 04:58:57.586543 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 04:58:57.588041 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 04:58:57.610991 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 04:58:57.614280 kernel: BTRFS info (device vda6): last unmount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:58:57.624391 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 04:58:57.639621 ignition[1007]: INFO : Ignition 2.22.0 Oct 13 04:58:57.639621 ignition[1007]: INFO : Stage: mount Oct 13 04:58:57.640904 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 04:58:57.640904 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:58:57.640904 ignition[1007]: INFO : mount: mount passed Oct 13 04:58:57.640904 ignition[1007]: INFO : Ignition finished successfully Oct 13 04:58:57.642646 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 04:58:57.644971 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 04:58:57.672371 systemd-networkd[744]: eth0: Gained IPv6LL Oct 13 04:58:58.441915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 13 04:58:58.459805 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019) Oct 13 04:58:58.459842 kernel: BTRFS info (device vda6): first mount of filesystem e9d5eae2-c289-4bda-a378-1699d81be8dc Oct 13 04:58:58.459855 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 13 04:58:58.463294 kernel: BTRFS info (device vda6): turning on async discard Oct 13 04:58:58.463329 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 04:58:58.464627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 04:58:58.496727 ignition[1036]: INFO : Ignition 2.22.0 Oct 13 04:58:58.496727 ignition[1036]: INFO : Stage: files Oct 13 04:58:58.498162 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 04:58:58.498162 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:58:58.498162 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Oct 13 04:58:58.501359 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 04:58:58.501359 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 04:58:58.501359 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 04:58:58.504999 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 04:58:58.504999 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 04:58:58.504999 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 13 04:58:58.504999 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 13 04:58:58.501861 unknown[1036]: wrote ssh authorized keys file for user: core 
Oct 13 04:58:58.584173 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 04:58:58.745873 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 13 04:58:58.745873 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 04:58:58.748722 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 04:58:58.758494 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 04:58:58.758494 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 04:58:58.758494 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:58:58.762835 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:58:58.762835 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:58:58.762835 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 13 04:58:59.182187 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 04:58:59.554588 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 13 04:58:59.554588 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 04:58:59.557506 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 04:58:59.560143 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 04:58:59.560143 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 04:58:59.560143 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 04:58:59.563816 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 04:58:59.563816 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 04:58:59.563816 
ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 04:58:59.563816 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 04:58:59.574598 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 04:58:59.578049 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 04:58:59.580469 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 04:58:59.580469 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 13 04:58:59.580469 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 04:58:59.580469 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 04:58:59.580469 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 04:58:59.580469 ignition[1036]: INFO : files: files passed Oct 13 04:58:59.580469 ignition[1036]: INFO : Ignition finished successfully Oct 13 04:58:59.581137 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 04:58:59.583171 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 04:58:59.584741 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 04:58:59.598008 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 04:58:59.598634 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 13 04:58:59.600239 initrd-setup-root-after-ignition[1065]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 04:58:59.602365 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 04:58:59.602365 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 04:58:59.604633 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 04:58:59.605078 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 04:58:59.606893 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 04:58:59.608760 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 04:58:59.643194 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 04:58:59.643359 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 04:58:59.644928 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 04:58:59.646185 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 04:58:59.647732 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 04:58:59.648519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 04:58:59.662603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 04:58:59.664662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 04:58:59.691056 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 04:58:59.691195 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Oct 13 04:58:59.692844 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 04:58:59.694329 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 04:58:59.695834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 04:58:59.695952 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 04:58:59.697858 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 04:58:59.699276 systemd[1]: Stopped target basic.target - Basic System. Oct 13 04:58:59.700511 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 04:58:59.701746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 04:58:59.703105 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 04:58:59.704536 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 04:58:59.705940 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 04:58:59.707295 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 04:58:59.708798 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 04:58:59.710240 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 04:58:59.711573 systemd[1]: Stopped target swap.target - Swaps. Oct 13 04:58:59.712747 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 04:58:59.712864 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 04:58:59.714630 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 04:58:59.716148 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 04:58:59.717930 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 04:58:59.721331 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 13 04:58:59.722329 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 04:58:59.722456 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 04:58:59.724788 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 04:58:59.724916 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 04:58:59.726433 systemd[1]: Stopped target paths.target - Path Units. Oct 13 04:58:59.727673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 04:58:59.731337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 04:58:59.732301 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 04:58:59.733967 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 04:58:59.735094 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 04:58:59.735183 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 04:58:59.736352 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 04:58:59.736436 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 04:58:59.737648 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 04:58:59.737765 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 04:58:59.739085 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 04:58:59.739190 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 04:58:59.741099 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 04:58:59.742265 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 04:58:59.742402 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 04:58:59.744842 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 13 04:58:59.745934 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 04:58:59.746069 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:58:59.747610 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 04:58:59.747723 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 04:58:59.749158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 04:58:59.749280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 04:58:59.754558 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 04:58:59.758436 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 04:58:59.768591 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 04:58:59.773586 ignition[1092]: INFO : Ignition 2.22.0 Oct 13 04:58:59.773586 ignition[1092]: INFO : Stage: umount Oct 13 04:58:59.775037 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 04:58:59.775037 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 04:58:59.775037 ignition[1092]: INFO : umount: umount passed Oct 13 04:58:59.775037 ignition[1092]: INFO : Ignition finished successfully Oct 13 04:58:59.777659 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 04:58:59.777758 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 04:58:59.779614 systemd[1]: Stopped target network.target - Network. Oct 13 04:58:59.781391 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 04:58:59.781452 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 04:58:59.782278 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 04:58:59.782327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 04:58:59.784176 systemd[1]: ignition-setup.service: Deactivated successfully. 
Oct 13 04:58:59.784223 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 04:58:59.786272 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 04:58:59.786316 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 04:58:59.789043 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 04:58:59.792156 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 04:58:59.797519 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 04:58:59.797627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 04:58:59.802583 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 04:58:59.802699 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 04:58:59.807550 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 04:58:59.807652 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 04:58:59.809504 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 04:58:59.811021 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 04:58:59.811071 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 04:58:59.812569 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 04:58:59.812615 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 04:58:59.814688 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 04:58:59.815905 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 04:58:59.815961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 04:58:59.817415 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 04:58:59.817458 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Oct 13 04:58:59.818733 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 04:58:59.818770 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 04:58:59.820264 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 04:58:59.840537 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 04:58:59.840686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 04:58:59.843173 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 04:58:59.843246 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 04:58:59.845747 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 04:58:59.845785 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 04:58:59.846596 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 04:58:59.846644 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 04:58:59.848635 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 04:58:59.848681 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 04:58:59.850670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 04:58:59.850714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 04:58:59.853467 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 04:58:59.854380 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 04:58:59.854438 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 04:58:59.856113 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 04:58:59.856158 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 13 04:58:59.857723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 04:58:59.857763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:58:59.859698 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 04:58:59.859794 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 04:58:59.862219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 04:58:59.862350 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 04:58:59.863429 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 04:58:59.865325 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 04:58:59.875269 systemd[1]: Switching root. Oct 13 04:58:59.914654 systemd-journald[343]: Journal stopped Oct 13 04:59:00.648753 systemd-journald[343]: Received SIGTERM from PID 1 (systemd). Oct 13 04:59:00.648806 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 04:59:00.648824 kernel: SELinux: policy capability open_perms=1 Oct 13 04:59:00.648834 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 04:59:00.648848 kernel: SELinux: policy capability always_check_network=0 Oct 13 04:59:00.648865 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 04:59:00.648875 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 04:59:00.648887 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 04:59:00.648897 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 04:59:00.648908 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 04:59:00.648918 kernel: audit: type=1403 audit(1760331540.100:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 04:59:00.648929 systemd[1]: Successfully loaded SELinux policy in 57.684ms. Oct 13 04:59:00.648945 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.432ms. 
Oct 13 04:59:00.648957 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 04:59:00.648969 systemd[1]: Detected virtualization kvm. Oct 13 04:59:00.648979 systemd[1]: Detected architecture arm64. Oct 13 04:59:00.648991 systemd[1]: Detected first boot. Oct 13 04:59:00.649001 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 04:59:00.649012 zram_generator::config[1137]: No configuration found. Oct 13 04:59:00.649024 kernel: NET: Registered PF_VSOCK protocol family Oct 13 04:59:00.649034 systemd[1]: Populated /etc with preset unit settings. Oct 13 04:59:00.649045 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 04:59:00.649055 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 04:59:00.649068 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 04:59:00.649079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 04:59:00.649090 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 04:59:00.649100 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 04:59:00.649111 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 04:59:00.649122 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 04:59:00.649133 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 04:59:00.649145 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 04:59:00.649156 systemd[1]: Created slice user.slice - User and Session Slice. 
Oct 13 04:59:00.649167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 04:59:00.649178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 04:59:00.649188 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 04:59:00.649199 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 04:59:00.649209 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 04:59:00.649222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 04:59:00.649232 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 13 04:59:00.649247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 04:59:00.649269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 04:59:00.649281 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 04:59:00.649292 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 04:59:00.649304 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 04:59:00.649316 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 04:59:00.649326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 04:59:00.649342 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 04:59:00.649355 systemd[1]: Reached target slices.target - Slice Units. Oct 13 04:59:00.649366 systemd[1]: Reached target swap.target - Swaps. Oct 13 04:59:00.649377 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 04:59:00.649389 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Oct 13 04:59:00.649400 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 04:59:00.649410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 04:59:00.649421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 04:59:00.649432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 04:59:00.649443 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 04:59:00.649453 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 04:59:00.649466 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 04:59:00.649476 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 04:59:00.649488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 04:59:00.649498 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 04:59:00.649509 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 04:59:00.649521 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 04:59:00.649531 systemd[1]: Reached target machines.target - Containers. Oct 13 04:59:00.649543 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 04:59:00.649553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:59:00.649564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 04:59:00.649575 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 04:59:00.649586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Oct 13 04:59:00.649596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 04:59:00.649607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:59:00.649619 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 04:59:00.649630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:59:00.649642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 04:59:00.649653 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 04:59:00.649663 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 04:59:00.649674 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 04:59:00.649684 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 04:59:00.649697 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:59:00.649707 kernel: fuse: init (API version 7.41) Oct 13 04:59:00.649717 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 04:59:00.649728 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 04:59:00.649738 kernel: ACPI: bus type drm_connector registered Oct 13 04:59:00.649748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 04:59:00.649759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 04:59:00.649772 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 04:59:00.649782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 13 04:59:00.649793 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 04:59:00.649803 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 04:59:00.649815 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 04:59:00.649826 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 04:59:00.649837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 04:59:00.649863 systemd-journald[1209]: Collecting audit messages is disabled. Oct 13 04:59:00.649888 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 04:59:00.649901 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 04:59:00.649913 systemd-journald[1209]: Journal started Oct 13 04:59:00.649933 systemd-journald[1209]: Runtime Journal (/run/log/journal/04ba903fa2d147d09603844744706635) is 6M, max 48.5M, 42.4M free. Oct 13 04:59:00.448660 systemd[1]: Queued start job for default target multi-user.target. Oct 13 04:59:00.466090 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 04:59:00.466527 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 04:59:00.655272 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 04:59:00.656068 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 04:59:00.657409 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 04:59:00.657570 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 04:59:00.658625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:59:00.658780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 04:59:00.659888 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 04:59:00.660052 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Oct 13 04:59:00.661155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:59:00.661354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:59:00.662439 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 04:59:00.662587 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 04:59:00.663622 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:59:00.663782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:59:00.664929 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 04:59:00.666152 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 04:59:00.667953 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 04:59:00.669482 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 04:59:00.681415 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 04:59:00.682580 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 04:59:00.684504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 04:59:00.686199 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 04:59:00.687109 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 04:59:00.687136 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 04:59:00.688821 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 04:59:00.689887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 13 04:59:00.697075 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 04:59:00.698826 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 04:59:00.699697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 04:59:00.700556 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 04:59:00.701514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 04:59:00.705412 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 04:59:00.705985 systemd-journald[1209]: Time spent on flushing to /var/log/journal/04ba903fa2d147d09603844744706635 is 18.768ms for 882 entries. Oct 13 04:59:00.705985 systemd-journald[1209]: System Journal (/var/log/journal/04ba903fa2d147d09603844744706635) is 8M, max 163.5M, 155.5M free. Oct 13 04:59:00.733484 systemd-journald[1209]: Received client request to flush runtime journal. Oct 13 04:59:00.733535 kernel: loop1: detected capacity change from 0 to 207008 Oct 13 04:59:00.708155 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 04:59:00.711416 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 04:59:00.712962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 04:59:00.714776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 04:59:00.716022 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 04:59:00.717336 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 04:59:00.722179 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Oct 13 04:59:00.724900 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 04:59:00.729369 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 04:59:00.743778 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 04:59:00.748210 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 04:59:00.752106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 04:59:00.756267 kernel: loop2: detected capacity change from 0 to 119344 Oct 13 04:59:00.754934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 04:59:00.766451 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 04:59:00.767762 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 04:59:00.777530 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Oct 13 04:59:00.777545 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Oct 13 04:59:00.781267 kernel: loop3: detected capacity change from 0 to 100624 Oct 13 04:59:00.783354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 04:59:00.794817 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 04:59:00.801288 kernel: loop4: detected capacity change from 0 to 207008 Oct 13 04:59:00.808278 kernel: loop5: detected capacity change from 0 to 119344 Oct 13 04:59:00.814278 kernel: loop6: detected capacity change from 0 to 100624 Oct 13 04:59:00.818968 (sd-merge)[1281]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 13 04:59:00.821988 (sd-merge)[1281]: Merged extensions into '/usr'. Oct 13 04:59:00.825627 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 04:59:00.825642 systemd[1]: Reloading... 
Oct 13 04:59:00.845473 systemd-resolved[1269]: Positive Trust Anchors: Oct 13 04:59:00.845490 systemd-resolved[1269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 04:59:00.845494 systemd-resolved[1269]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 04:59:00.845525 systemd-resolved[1269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 04:59:00.852595 systemd-resolved[1269]: Defaulting to hostname 'linux'. Oct 13 04:59:00.879281 zram_generator::config[1307]: No configuration found. Oct 13 04:59:01.009609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 04:59:01.009837 systemd[1]: Reloading finished in 183 ms. Oct 13 04:59:01.039658 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 04:59:01.040778 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 04:59:01.043588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 04:59:01.054386 systemd[1]: Starting ensure-sysext.service... Oct 13 04:59:01.055963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 04:59:01.073069 systemd[1]: Reload requested from client PID 1344 ('systemctl') (unit ensure-sysext.service)... Oct 13 04:59:01.073086 systemd[1]: Reloading... 
Oct 13 04:59:01.076853 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 04:59:01.076888 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 04:59:01.077111 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 04:59:01.077373 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 04:59:01.078062 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 04:59:01.078309 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Oct 13 04:59:01.078369 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Oct 13 04:59:01.082151 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 04:59:01.082168 systemd-tmpfiles[1345]: Skipping /boot Oct 13 04:59:01.088664 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 04:59:01.088680 systemd-tmpfiles[1345]: Skipping /boot Oct 13 04:59:01.124283 zram_generator::config[1375]: No configuration found. Oct 13 04:59:01.248457 systemd[1]: Reloading finished in 175 ms. Oct 13 04:59:01.265704 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 04:59:01.290352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 04:59:01.297052 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 04:59:01.299084 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 04:59:01.310122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 04:59:01.312135 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Oct 13 04:59:01.317603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 04:59:01.320311 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 04:59:01.324162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:59:01.327526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 04:59:01.330400 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 04:59:01.332791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 04:59:01.333813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:59:01.333948 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:59:01.339600 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 04:59:01.342548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 04:59:01.342791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:59:01.342887 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:59:01.345937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 13 04:59:01.347615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 04:59:01.348980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 04:59:01.349085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 04:59:01.350321 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 04:59:01.358869 systemd[1]: Finished ensure-sysext.service. Oct 13 04:59:01.359154 systemd-udevd[1419]: Using default interface naming scheme 'v257'. Oct 13 04:59:01.365196 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 04:59:01.366575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 04:59:01.366788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 04:59:01.368299 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 04:59:01.368462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 04:59:01.370121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 04:59:01.372151 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 04:59:01.373809 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 04:59:01.376012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 04:59:01.376232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 13 04:59:01.377416 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 04:59:01.377562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 04:59:01.379382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 04:59:01.383626 augenrules[1451]: No rules Oct 13 04:59:01.384746 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 04:59:01.385072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 04:59:01.386084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 04:59:01.392426 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 04:59:01.427212 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 13 04:59:01.428071 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 04:59:01.429330 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 04:59:01.479149 systemd-networkd[1466]: lo: Link UP Oct 13 04:59:01.479157 systemd-networkd[1466]: lo: Gained carrier Oct 13 04:59:01.483756 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 04:59:01.483758 systemd-networkd[1466]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:59:01.483769 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 04:59:01.484960 systemd[1]: Reached target network.target - Network. 
Oct 13 04:59:01.485592 systemd-networkd[1466]: eth0: Link UP Oct 13 04:59:01.485884 systemd-networkd[1466]: eth0: Gained carrier Oct 13 04:59:01.486192 systemd-networkd[1466]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:59:01.488008 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 04:59:01.490593 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 04:59:01.495528 systemd-networkd[1466]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 04:59:01.504768 systemd-networkd[1466]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 04:59:01.507128 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. Oct 13 04:59:01.507924 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 04:59:01.507979 systemd-timesyncd[1441]: Initial clock synchronization to Mon 2025-10-13 04:59:01.855386 UTC. Oct 13 04:59:01.511073 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 04:59:01.528347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 04:59:01.534880 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 04:59:01.565091 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 04:59:01.602510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 04:59:01.627791 ldconfig[1413]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Oct 13 04:59:01.633327 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 04:59:01.635915 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 04:59:01.648457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 04:59:01.653681 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 04:59:01.654786 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 04:59:01.655745 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 04:59:01.656731 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 04:59:01.657828 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 04:59:01.658782 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 04:59:01.659791 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 04:59:01.660748 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 04:59:01.660780 systemd[1]: Reached target paths.target - Path Units. Oct 13 04:59:01.661469 systemd[1]: Reached target timers.target - Timer Units. Oct 13 04:59:01.664351 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 04:59:01.666449 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 04:59:01.669215 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 04:59:01.670359 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 04:59:01.671320 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Oct 13 04:59:01.677697 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 04:59:01.679002 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 04:59:01.680532 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 04:59:01.681452 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 04:59:01.682182 systemd[1]: Reached target basic.target - Basic System. Oct 13 04:59:01.682988 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 04:59:01.683019 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 04:59:01.683941 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 04:59:01.685745 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 04:59:01.687413 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 04:59:01.689172 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 04:59:01.690965 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 04:59:01.691914 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 04:59:01.693411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 04:59:01.695103 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 04:59:01.695937 jq[1522]: false Oct 13 04:59:01.697168 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 04:59:01.700363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 04:59:01.703933 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 13 04:59:01.704872 extend-filesystems[1523]: Found /dev/vda6 Oct 13 04:59:01.709086 extend-filesystems[1523]: Found /dev/vda9 Oct 13 04:59:01.705176 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 04:59:01.713385 extend-filesystems[1523]: Checking size of /dev/vda9 Oct 13 04:59:01.705582 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 04:59:01.707568 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 04:59:01.709851 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 04:59:01.719493 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 04:59:01.723395 jq[1539]: true Oct 13 04:59:01.721724 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 04:59:01.721887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 04:59:01.722116 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 04:59:01.722418 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 04:59:01.725174 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 04:59:01.725374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 13 04:59:01.739698 extend-filesystems[1523]: Resized partition /dev/vda9 Oct 13 04:59:01.743575 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 04:59:01.748753 update_engine[1538]: I20251013 04:59:01.747771 1538 main.cc:92] Flatcar Update Engine starting Oct 13 04:59:01.752294 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 13 04:59:01.757586 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 04:59:01.759326 tar[1550]: linux-arm64/LICENSE Oct 13 04:59:01.759326 tar[1550]: linux-arm64/helm Oct 13 04:59:01.763873 jq[1552]: true Oct 13 04:59:01.768561 dbus-daemon[1520]: [system] SELinux support is enabled Oct 13 04:59:01.770168 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 04:59:01.773521 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 04:59:01.773549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 04:59:01.774632 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 04:59:01.774707 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 04:59:01.780754 update_engine[1538]: I20251013 04:59:01.780019 1538 update_check_scheduler.cc:74] Next update check in 9m16s Oct 13 04:59:01.779951 systemd[1]: Started update-engine.service - Update Engine. Oct 13 04:59:01.782183 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 13 04:59:01.789427 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 13 04:59:01.807358 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 04:59:01.807358 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 04:59:01.807358 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 13 04:59:01.811217 extend-filesystems[1523]: Resized filesystem in /dev/vda9 Oct 13 04:59:01.809386 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 04:59:01.809595 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 04:59:01.817538 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (Power Button) Oct 13 04:59:01.817741 systemd-logind[1533]: New seat seat0. Oct 13 04:59:01.818801 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 04:59:01.831960 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Oct 13 04:59:01.835425 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 04:59:01.837981 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 04:59:01.841298 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 04:59:01.910972 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 04:59:01.931545 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 04:59:01.935185 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 13 04:59:01.946582 containerd[1566]: time="2025-10-13T04:59:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 04:59:01.947478 containerd[1566]: time="2025-10-13T04:59:01.947433640Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 04:59:01.953753 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 04:59:01.953989 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 04:59:01.956265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 04:59:01.959376 containerd[1566]: time="2025-10-13T04:59:01.959210440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.8µs" Oct 13 04:59:01.959376 containerd[1566]: time="2025-10-13T04:59:01.959243440Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 04:59:01.959376 containerd[1566]: time="2025-10-13T04:59:01.959324840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 04:59:01.959614 containerd[1566]: time="2025-10-13T04:59:01.959576520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 04:59:01.959614 containerd[1566]: time="2025-10-13T04:59:01.959605200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 04:59:01.959699 containerd[1566]: time="2025-10-13T04:59:01.959682160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 04:59:01.959957 containerd[1566]: time="2025-10-13T04:59:01.959923800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 04:59:01.959957 containerd[1566]: time="2025-10-13T04:59:01.959950880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960313 containerd[1566]: time="2025-10-13T04:59:01.960248320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960341 containerd[1566]: time="2025-10-13T04:59:01.960313640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960341 containerd[1566]: time="2025-10-13T04:59:01.960327600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960375 containerd[1566]: time="2025-10-13T04:59:01.960346120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960460 containerd[1566]: time="2025-10-13T04:59:01.960444480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960643 containerd[1566]: time="2025-10-13T04:59:01.960625000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960673 containerd[1566]: time="2025-10-13T04:59:01.960660080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 04:59:01.960691 containerd[1566]: time="2025-10-13T04:59:01.960676640Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 04:59:01.960716 containerd[1566]: time="2025-10-13T04:59:01.960706240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 04:59:01.962370 containerd[1566]: time="2025-10-13T04:59:01.961520360Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 04:59:01.962370 containerd[1566]: time="2025-10-13T04:59:01.961911000Z" level=info msg="metadata content store policy set" policy=shared Oct 13 04:59:01.966002 containerd[1566]: time="2025-10-13T04:59:01.965915400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 04:59:01.966048 containerd[1566]: time="2025-10-13T04:59:01.966032280Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 04:59:01.966068 containerd[1566]: time="2025-10-13T04:59:01.966052080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 04:59:01.966085 containerd[1566]: time="2025-10-13T04:59:01.966065240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 04:59:01.966188 containerd[1566]: time="2025-10-13T04:59:01.966169680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 04:59:01.966211 containerd[1566]: time="2025-10-13T04:59:01.966190800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 04:59:01.966263 containerd[1566]: time="2025-10-13T04:59:01.966204240Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 04:59:01.966304 containerd[1566]: time="2025-10-13T04:59:01.966271560Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Oct 13 04:59:01.966304 containerd[1566]: time="2025-10-13T04:59:01.966291960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 04:59:01.966383 containerd[1566]: time="2025-10-13T04:59:01.966304600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 04:59:01.966408 containerd[1566]: time="2025-10-13T04:59:01.966386040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 04:59:01.966408 containerd[1566]: time="2025-10-13T04:59:01.966402520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 04:59:01.966674 containerd[1566]: time="2025-10-13T04:59:01.966651640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 04:59:01.966706 containerd[1566]: time="2025-10-13T04:59:01.966694160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 04:59:01.966769 containerd[1566]: time="2025-10-13T04:59:01.966754560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 04:59:01.966787 containerd[1566]: time="2025-10-13T04:59:01.966774600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 04:59:01.966804 containerd[1566]: time="2025-10-13T04:59:01.966785760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 04:59:01.966804 containerd[1566]: time="2025-10-13T04:59:01.966796560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 04:59:01.966840 containerd[1566]: time="2025-10-13T04:59:01.966808200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 04:59:01.966840 
containerd[1566]: time="2025-10-13T04:59:01.966826000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 04:59:01.966877 containerd[1566]: time="2025-10-13T04:59:01.966839600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 04:59:01.966920 containerd[1566]: time="2025-10-13T04:59:01.966904200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 04:59:01.966939 containerd[1566]: time="2025-10-13T04:59:01.966928440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 04:59:01.967215 containerd[1566]: time="2025-10-13T04:59:01.967196320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 04:59:01.967235 containerd[1566]: time="2025-10-13T04:59:01.967222960Z" level=info msg="Start snapshots syncer" Oct 13 04:59:01.967360 containerd[1566]: time="2025-10-13T04:59:01.967331640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 04:59:01.967758 containerd[1566]: time="2025-10-13T04:59:01.967719960Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 04:59:01.967834 containerd[1566]: time="2025-10-13T04:59:01.967785600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 04:59:01.968181 containerd[1566]: time="2025-10-13T04:59:01.968097440Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 04:59:01.968510 containerd[1566]: time="2025-10-13T04:59:01.968482640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 04:59:01.968576 containerd[1566]: time="2025-10-13T04:59:01.968561240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 04:59:01.968596 containerd[1566]: time="2025-10-13T04:59:01.968581000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 04:59:01.968596 containerd[1566]: time="2025-10-13T04:59:01.968592840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 04:59:01.968657 containerd[1566]: time="2025-10-13T04:59:01.968642520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 04:59:01.968675 containerd[1566]: time="2025-10-13T04:59:01.968662480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 04:59:01.968701 containerd[1566]: time="2025-10-13T04:59:01.968675120Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 04:59:01.968757 containerd[1566]: time="2025-10-13T04:59:01.968741000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 04:59:01.968824 containerd[1566]: time="2025-10-13T04:59:01.968764120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 04:59:01.968843 containerd[1566]: time="2025-10-13T04:59:01.968831240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 04:59:01.968932 containerd[1566]: time="2025-10-13T04:59:01.968915720Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 04:59:01.969078 containerd[1566]: time="2025-10-13T04:59:01.968938360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 04:59:01.969106 containerd[1566]: time="2025-10-13T04:59:01.969095320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 04:59:01.969176 containerd[1566]: time="2025-10-13T04:59:01.969159440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 04:59:01.969230 containerd[1566]: time="2025-10-13T04:59:01.969177160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 04:59:01.969250 containerd[1566]: time="2025-10-13T04:59:01.969241840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 04:59:01.969287 containerd[1566]: time="2025-10-13T04:59:01.969275800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 04:59:01.969430 containerd[1566]: time="2025-10-13T04:59:01.969412120Z" level=info msg="runtime interface created" Oct 13 04:59:01.969430 containerd[1566]: time="2025-10-13T04:59:01.969428000Z" level=info msg="created NRI interface" Oct 13 04:59:01.969479 containerd[1566]: time="2025-10-13T04:59:01.969443200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 04:59:01.969479 containerd[1566]: time="2025-10-13T04:59:01.969457520Z" level=info msg="Connect containerd service" Oct 13 04:59:01.969511 containerd[1566]: time="2025-10-13T04:59:01.969488480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 04:59:01.971284 
containerd[1566]: time="2025-10-13T04:59:01.970871360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 04:59:01.971551 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 04:59:01.976021 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 04:59:01.978249 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 04:59:01.980390 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 04:59:02.034798 containerd[1566]: time="2025-10-13T04:59:02.034675415Z" level=info msg="Start subscribing containerd event" Oct 13 04:59:02.034798 containerd[1566]: time="2025-10-13T04:59:02.034745033Z" level=info msg="Start recovering state" Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034832682Z" level=info msg="Start event monitor" Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034846957Z" level=info msg="Start cni network conf syncer for default" Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034856348Z" level=info msg="Start streaming server" Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034865530Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034872166Z" level=info msg="runtime interface starting up..." Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034891324Z" level=info msg="starting plugins..." Oct 13 04:59:02.034914 containerd[1566]: time="2025-10-13T04:59:02.034904429Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 04:59:02.035033 containerd[1566]: time="2025-10-13T04:59:02.034839152Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 13 04:59:02.035052 containerd[1566]: time="2025-10-13T04:59:02.035031145Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 04:59:02.035216 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 04:59:02.036167 containerd[1566]: time="2025-10-13T04:59:02.036148169Z" level=info msg="containerd successfully booted in 0.089985s" Oct 13 04:59:02.078950 tar[1550]: linux-arm64/README.md Oct 13 04:59:02.099411 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 04:59:02.984578 systemd-networkd[1466]: eth0: Gained IPv6LL Oct 13 04:59:02.986822 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 04:59:02.988261 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 04:59:02.993838 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 04:59:02.996439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:02.998272 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 04:59:03.036523 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 04:59:03.038027 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 04:59:03.038299 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 04:59:03.040198 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 04:59:03.606603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:03.607925 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 04:59:03.608979 systemd[1]: Startup finished in 1.225s (kernel) + 4.950s (initrd) + 3.566s (userspace) = 9.743s. 
Oct 13 04:59:03.610264 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:03.962555 kubelet[1658]: E1013 04:59:03.962496 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:03.964805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:03.964944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:03.965356 systemd[1]: kubelet.service: Consumed 739ms CPU time, 256.8M memory peak. Oct 13 04:59:06.486845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 04:59:06.487965 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884). Oct 13 04:59:06.579585 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:06.581803 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:06.595813 systemd-logind[1533]: New session 1 of user core. Oct 13 04:59:06.596866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 04:59:06.598092 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 04:59:06.630512 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 04:59:06.632891 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 13 04:59:06.648519 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 04:59:06.651009 systemd-logind[1533]: New session c1 of user core. Oct 13 04:59:06.753538 systemd[1676]: Queued start job for default target default.target. Oct 13 04:59:06.777304 systemd[1676]: Created slice app.slice - User Application Slice. Oct 13 04:59:06.777334 systemd[1676]: Reached target paths.target - Paths. Oct 13 04:59:06.777375 systemd[1676]: Reached target timers.target - Timers. Oct 13 04:59:06.778673 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 04:59:06.790322 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 04:59:06.790436 systemd[1676]: Reached target sockets.target - Sockets. Oct 13 04:59:06.790479 systemd[1676]: Reached target basic.target - Basic System. Oct 13 04:59:06.790509 systemd[1676]: Reached target default.target - Main User Target. Oct 13 04:59:06.790540 systemd[1676]: Startup finished in 133ms. Oct 13 04:59:06.790688 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 04:59:06.792017 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 04:59:06.854215 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:36896.service - OpenSSH per-connection server daemon (10.0.0.1:36896). Oct 13 04:59:06.924530 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 36896 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:06.926162 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:06.930100 systemd-logind[1533]: New session 2 of user core. Oct 13 04:59:06.942528 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 13 04:59:06.996307 sshd[1690]: Connection closed by 10.0.0.1 port 36896 Oct 13 04:59:06.996329 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:07.006341 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:36896.service: Deactivated successfully. Oct 13 04:59:07.009587 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 04:59:07.010401 systemd-logind[1533]: Session 2 logged out. Waiting for processes to exit. Oct 13 04:59:07.014028 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:36910.service - OpenSSH per-connection server daemon (10.0.0.1:36910). Oct 13 04:59:07.015680 systemd-logind[1533]: Removed session 2. Oct 13 04:59:07.071196 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:07.073493 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:07.078105 systemd-logind[1533]: New session 3 of user core. Oct 13 04:59:07.091524 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 04:59:07.140808 sshd[1700]: Connection closed by 10.0.0.1 port 36910 Oct 13 04:59:07.141321 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:07.153780 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:36910.service: Deactivated successfully. Oct 13 04:59:07.156901 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 04:59:07.157625 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Oct 13 04:59:07.160331 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:36914.service - OpenSSH per-connection server daemon (10.0.0.1:36914). Oct 13 04:59:07.160993 systemd-logind[1533]: Removed session 3. 
Oct 13 04:59:07.220011 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 36914 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:07.221391 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:07.225128 systemd-logind[1533]: New session 4 of user core. Oct 13 04:59:07.233454 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 04:59:07.286114 sshd[1711]: Connection closed by 10.0.0.1 port 36914 Oct 13 04:59:07.285907 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:07.305599 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:36914.service: Deactivated successfully. Oct 13 04:59:07.307264 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 04:59:07.308071 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit. Oct 13 04:59:07.310515 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:36924.service - OpenSSH per-connection server daemon (10.0.0.1:36924). Oct 13 04:59:07.310979 systemd-logind[1533]: Removed session 4. Oct 13 04:59:07.364443 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 36924 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:07.365716 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:07.369636 systemd-logind[1533]: New session 5 of user core. Oct 13 04:59:07.385475 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 04:59:07.444493 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 04:59:07.445129 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:07.458794 sudo[1721]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:07.462197 sshd[1720]: Connection closed by 10.0.0.1 port 36924 Oct 13 04:59:07.461244 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:07.471673 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:36924.service: Deactivated successfully. Oct 13 04:59:07.474896 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 04:59:07.475656 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit. Oct 13 04:59:07.478249 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:36928.service - OpenSSH per-connection server daemon (10.0.0.1:36928). Oct 13 04:59:07.478868 systemd-logind[1533]: Removed session 5. Oct 13 04:59:07.543326 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 36928 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:07.544903 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:07.548868 systemd-logind[1533]: New session 6 of user core. Oct 13 04:59:07.555454 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 04:59:07.609404 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 04:59:07.609677 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:07.658077 sudo[1732]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:07.664544 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 04:59:07.664816 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:07.674075 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 04:59:07.723543 augenrules[1754]: No rules Oct 13 04:59:07.724823 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 04:59:07.726331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 04:59:07.727657 sudo[1731]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:07.729326 sshd[1730]: Connection closed by 10.0.0.1 port 36928 Oct 13 04:59:07.729446 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:07.745223 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:36928.service: Deactivated successfully. Oct 13 04:59:07.748761 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 04:59:07.750386 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit. Oct 13 04:59:07.751992 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:36942.service - OpenSSH per-connection server daemon (10.0.0.1:36942). Oct 13 04:59:07.753263 systemd-logind[1533]: Removed session 6. Oct 13 04:59:07.804867 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 36942 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 04:59:07.806088 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 04:59:07.810361 systemd-logind[1533]: New session 7 of user core. 
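The `sudo[NNNN]: core : PWD=... ; USER=root ; COMMAND=...` entries above are sudo's default audit line. A small sketch of extracting the invoked commands from such lines (the regex is an assumption fitted to this log's field layout):

```python
import re

# Matches sudo's default log format as seen above:
#   sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
SUDO = re.compile(r"sudo\[\d+\]: (\w+) : PWD=(\S+) ; USER=(\w+) ; COMMAND=(.+)$")

def sudo_commands(lines):
    """Yield one dict per sudo invocation found in the given log lines."""
    for line in lines:
        if (m := SUDO.search(line)):
            yield {"user": m.group(1), "cwd": m.group(2),
                   "runas": m.group(3), "command": m.group(4)}

line = ("Oct 13 04:59:07.444493 sudo[1721]: core : PWD=/home/core ; "
        "USER=root ; COMMAND=/usr/sbin/setenforce 1")
print(next(sudo_commands([line])))
```

Run over this journal it would surface the `setenforce`, `rm -rf /etc/audit/rules.d/...`, `systemctl restart audit-rules`, and `install.sh` invocations logged above.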
Oct 13 04:59:07.820463 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 04:59:07.873587 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 04:59:07.873860 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 04:59:08.153263 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 04:59:08.177646 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 04:59:08.403825 dockerd[1788]: time="2025-10-13T04:59:08.403756888Z" level=info msg="Starting up" Oct 13 04:59:08.404824 dockerd[1788]: time="2025-10-13T04:59:08.404735359Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 04:59:08.417199 dockerd[1788]: time="2025-10-13T04:59:08.417155800Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 04:59:08.524079 dockerd[1788]: time="2025-10-13T04:59:08.523912543Z" level=info msg="Loading containers: start." Oct 13 04:59:08.533292 kernel: Initializing XFRM netlink socket Oct 13 04:59:08.733858 systemd-networkd[1466]: docker0: Link UP Oct 13 04:59:08.737720 dockerd[1788]: time="2025-10-13T04:59:08.737664308Z" level=info msg="Loading containers: done." 
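The containerd `Pulled image ... in <duration>` entries that follow report per-image pull timings in Go's `time.Duration` rendering (e.g. `1.480924846s`, `438.943216ms`). A minimal extractor, assuming only the `ms`/`s` units that actually appear in this log:

```python
import re

# Go time.Duration suffixes seen in this log are only "ms" and "s".
TIMING = re.compile(r" in ([\d.]+)(ms|s)\b")

def pull_seconds(line):
    """Return the pull duration in seconds, or None if the line has none."""
    m = TIMING.search(line)
    if not m:
        return None
    value = float(m.group(1))
    return value / 1000.0 if m.group(2) == "ms" else value

print(pull_seconds('msg="Pulled image \\"registry.k8s.io/pause:3.10\\" in 438.943216ms"'))
```

Summing the results across the pulls below would give the total time spent fetching the control-plane images.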
Oct 13 04:59:08.752501 dockerd[1788]: time="2025-10-13T04:59:08.752440029Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 04:59:08.752651 dockerd[1788]: time="2025-10-13T04:59:08.752536065Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 04:59:08.752651 dockerd[1788]: time="2025-10-13T04:59:08.752626801Z" level=info msg="Initializing buildkit" Oct 13 04:59:08.773613 dockerd[1788]: time="2025-10-13T04:59:08.773555589Z" level=info msg="Completed buildkit initialization" Oct 13 04:59:08.780810 dockerd[1788]: time="2025-10-13T04:59:08.780651567Z" level=info msg="Daemon has completed initialization" Oct 13 04:59:08.780910 dockerd[1788]: time="2025-10-13T04:59:08.780726561Z" level=info msg="API listen on /run/docker.sock" Oct 13 04:59:08.780946 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 04:59:09.404563 containerd[1566]: time="2025-10-13T04:59:09.404526596Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 13 04:59:09.504983 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3533140169-merged.mount: Deactivated successfully. Oct 13 04:59:09.983015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583243930.mount: Deactivated successfully. 
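The mount-unit names above (e.g. `var-lib-containerd-tmpmounts-containerd\x2dmount583243930.mount`) use systemd's path escaping: `/` becomes `-` and bytes outside the safe set become `\xXX`, which is why the literal `-` in `containerd-mount...` shows up as `\x2d`. A minimal sketch of that escaping (not the full `systemd-escape` feature set):

```python
def systemd_escape_path(path):
    """Escape a filesystem path into a systemd unit-name stem, roughly as
    systemd-escape --path does: strip slashes, map "/" to "-", keep
    [A-Za-z0-9:_.], and hex-escape everything else as \\xXX."""
    trimmed = path.strip("/")
    if not trimmed:
        return "-"  # the root directory escapes to "-"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif (ch.isascii() and ch.isalnum()) or ch in ":_.":
            if ch == "." and i == 0:
                out.append(r"\x2e")  # a leading dot must also be escaped
            else:
                out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount583243930"))
```

The printed stem plus a `.mount` suffix reproduces the unit name logged above.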
Oct 13 04:59:10.879889 containerd[1566]: time="2025-10-13T04:59:10.879842252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:10.880850 containerd[1566]: time="2025-10-13T04:59:10.880782480Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Oct 13 04:59:10.881872 containerd[1566]: time="2025-10-13T04:59:10.881424726Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:10.884593 containerd[1566]: time="2025-10-13T04:59:10.884563732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:10.885523 containerd[1566]: time="2025-10-13T04:59:10.885492512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.480924846s" Oct 13 04:59:10.885586 containerd[1566]: time="2025-10-13T04:59:10.885531282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 13 04:59:10.886172 containerd[1566]: time="2025-10-13T04:59:10.886138736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 13 04:59:11.944582 containerd[1566]: time="2025-10-13T04:59:11.944534054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:11.945491 containerd[1566]: time="2025-10-13T04:59:11.945459303Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Oct 13 04:59:11.946816 containerd[1566]: time="2025-10-13T04:59:11.946143687Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:11.948699 containerd[1566]: time="2025-10-13T04:59:11.948667910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:11.950583 containerd[1566]: time="2025-10-13T04:59:11.950549164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.064275581s" Oct 13 04:59:11.950698 containerd[1566]: time="2025-10-13T04:59:11.950680457Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 13 04:59:11.951146 containerd[1566]: time="2025-10-13T04:59:11.951124016Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 13 04:59:13.228560 containerd[1566]: time="2025-10-13T04:59:13.228511363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:13.229358 containerd[1566]: time="2025-10-13T04:59:13.229322918Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Oct 13 04:59:13.229861 containerd[1566]: time="2025-10-13T04:59:13.229832039Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:13.232858 containerd[1566]: time="2025-10-13T04:59:13.232823456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:13.233911 containerd[1566]: time="2025-10-13T04:59:13.233685229Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.282531251s" Oct 13 04:59:13.233911 containerd[1566]: time="2025-10-13T04:59:13.233717670Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 13 04:59:13.234267 containerd[1566]: time="2025-10-13T04:59:13.234236002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 13 04:59:14.018362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 04:59:14.021449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:14.167424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 04:59:14.180708 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 04:59:14.224485 kubelet[2086]: E1013 04:59:14.224420 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 04:59:14.230237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 04:59:14.230397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 04:59:14.230702 systemd[1]: kubelet.service: Consumed 145ms CPU time, 108.1M memory peak. Oct 13 04:59:14.305518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851121145.mount: Deactivated successfully. Oct 13 04:59:14.611391 containerd[1566]: time="2025-10-13T04:59:14.611283957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:14.611804 containerd[1566]: time="2025-10-13T04:59:14.611772152Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Oct 13 04:59:14.612708 containerd[1566]: time="2025-10-13T04:59:14.612665218Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:14.614366 containerd[1566]: time="2025-10-13T04:59:14.614315896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:14.615001 containerd[1566]: time="2025-10-13T04:59:14.614961415Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.380693547s" Oct 13 04:59:14.615033 containerd[1566]: time="2025-10-13T04:59:14.614993372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 13 04:59:14.615626 containerd[1566]: time="2025-10-13T04:59:14.615605280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 13 04:59:15.278727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376880238.mount: Deactivated successfully. Oct 13 04:59:16.102651 containerd[1566]: time="2025-10-13T04:59:16.102584009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:16.103795 containerd[1566]: time="2025-10-13T04:59:16.103759914Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Oct 13 04:59:16.105275 containerd[1566]: time="2025-10-13T04:59:16.104893054Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:16.108174 containerd[1566]: time="2025-10-13T04:59:16.108147551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:16.109966 containerd[1566]: time="2025-10-13T04:59:16.109883423Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.494247976s" Oct 13 04:59:16.109966 containerd[1566]: time="2025-10-13T04:59:16.109931261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 13 04:59:16.110337 containerd[1566]: time="2025-10-13T04:59:16.110312679Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 04:59:16.540765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753695372.mount: Deactivated successfully. Oct 13 04:59:16.545560 containerd[1566]: time="2025-10-13T04:59:16.545513753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:59:16.546047 containerd[1566]: time="2025-10-13T04:59:16.546012674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 13 04:59:16.546891 containerd[1566]: time="2025-10-13T04:59:16.546843885Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:59:16.548788 containerd[1566]: time="2025-10-13T04:59:16.548732090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 04:59:16.549449 containerd[1566]: time="2025-10-13T04:59:16.549286982Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 438.943216ms" Oct 13 04:59:16.549449 containerd[1566]: time="2025-10-13T04:59:16.549319036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 13 04:59:16.549842 containerd[1566]: time="2025-10-13T04:59:16.549816868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 13 04:59:17.037932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482146051.mount: Deactivated successfully. Oct 13 04:59:18.642656 containerd[1566]: time="2025-10-13T04:59:18.642600300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:18.643048 containerd[1566]: time="2025-10-13T04:59:18.643013529Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Oct 13 04:59:18.643977 containerd[1566]: time="2025-10-13T04:59:18.643931814Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:18.646550 containerd[1566]: time="2025-10-13T04:59:18.646515277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:18.648572 containerd[1566]: time="2025-10-13T04:59:18.648526822Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.098611955s" Oct 13 04:59:18.648572 containerd[1566]: time="2025-10-13T04:59:18.648569600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 13 04:59:23.583679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:23.583830 systemd[1]: kubelet.service: Consumed 145ms CPU time, 108.1M memory peak. Oct 13 04:59:23.585588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:23.605385 systemd[1]: Reload requested from client PID 2238 ('systemctl') (unit session-7.scope)... Oct 13 04:59:23.605403 systemd[1]: Reloading... Oct 13 04:59:23.665299 zram_generator::config[2283]: No configuration found. Oct 13 04:59:23.823870 systemd[1]: Reloading finished in 218 ms. Oct 13 04:59:23.875883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:23.877952 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 04:59:23.878160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:23.878205 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.3M memory peak. Oct 13 04:59:23.879471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:24.006937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:24.010325 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 04:59:24.042990 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:59:24.042990 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 04:59:24.042990 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:59:24.043297 kubelet[2330]: I1013 04:59:24.043039 2330 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 04:59:24.840947 kubelet[2330]: I1013 04:59:24.840887 2330 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 04:59:24.840947 kubelet[2330]: I1013 04:59:24.840919 2330 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 04:59:24.841366 kubelet[2330]: I1013 04:59:24.841342 2330 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 04:59:24.866306 kubelet[2330]: E1013 04:59:24.865957 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:24.866499 kubelet[2330]: I1013 04:59:24.866486 2330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 04:59:24.872688 kubelet[2330]: I1013 04:59:24.872669 2330 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 04:59:24.875233 kubelet[2330]: I1013 04:59:24.875216 
2330 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 04:59:24.876409 kubelet[2330]: I1013 04:59:24.876368 2330 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 04:59:24.876561 kubelet[2330]: I1013 04:59:24.876405 2330 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 04:59:24.876649 kubelet[2330]: I1013 
04:59:24.876632 2330 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 04:59:24.876649 kubelet[2330]: I1013 04:59:24.876642 2330 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 04:59:24.876830 kubelet[2330]: I1013 04:59:24.876816 2330 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:59:24.879148 kubelet[2330]: I1013 04:59:24.879104 2330 kubelet.go:446] "Attempting to sync node with API server" Oct 13 04:59:24.879148 kubelet[2330]: I1013 04:59:24.879124 2330 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 04:59:24.879148 kubelet[2330]: I1013 04:59:24.879150 2330 kubelet.go:352] "Adding apiserver pod source" Oct 13 04:59:24.879311 kubelet[2330]: I1013 04:59:24.879160 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 04:59:24.883281 kubelet[2330]: W1013 04:59:24.883158 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 13 04:59:24.883281 kubelet[2330]: E1013 04:59:24.883228 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:24.884340 kubelet[2330]: W1013 04:59:24.884290 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 13 04:59:24.884402 kubelet[2330]: E1013 04:59:24.884350 2330 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:24.885118 kubelet[2330]: I1013 04:59:24.885079 2330 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 04:59:24.886279 kubelet[2330]: I1013 04:59:24.886062 2330 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 04:59:24.886279 kubelet[2330]: W1013 04:59:24.886204 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 04:59:24.887019 kubelet[2330]: I1013 04:59:24.887003 2330 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 04:59:24.887065 kubelet[2330]: I1013 04:59:24.887038 2330 server.go:1287] "Started kubelet" Oct 13 04:59:24.887175 kubelet[2330]: I1013 04:59:24.887127 2330 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 04:59:24.887580 kubelet[2330]: I1013 04:59:24.887526 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 04:59:24.887796 kubelet[2330]: I1013 04:59:24.887769 2330 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 04:59:24.888203 kubelet[2330]: I1013 04:59:24.888185 2330 server.go:479] "Adding debug handlers to kubelet server" Oct 13 04:59:24.889326 kubelet[2330]: I1013 04:59:24.889301 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 04:59:24.889920 kubelet[2330]: E1013 04:59:24.889616 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: 
connect: connection refused" event="&Event{ObjectMeta:{localhost.186df43b4bff89c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 04:59:24.887017925 +0000 UTC m=+0.873923826,LastTimestamp:2025-10-13 04:59:24.887017925 +0000 UTC m=+0.873923826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 04:59:24.890039 kubelet[2330]: I1013 04:59:24.889929 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 04:59:24.890528 kubelet[2330]: E1013 04:59:24.890213 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:24.890528 kubelet[2330]: I1013 04:59:24.890251 2330 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 04:59:24.890528 kubelet[2330]: I1013 04:59:24.890426 2330 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 04:59:24.890528 kubelet[2330]: I1013 04:59:24.890471 2330 reconciler.go:26] "Reconciler: start to sync state" Oct 13 04:59:24.890776 kubelet[2330]: W1013 04:59:24.890728 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 13 04:59:24.890823 kubelet[2330]: E1013 04:59:24.890774 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:24.892279 kubelet[2330]: I1013 04:59:24.891557 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 04:59:24.892279 kubelet[2330]: E1013 04:59:24.891850 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Oct 13 04:59:24.893039 kubelet[2330]: I1013 04:59:24.893016 2330 factory.go:221] Registration of the containerd container factory successfully Oct 13 04:59:24.893039 kubelet[2330]: I1013 04:59:24.893039 2330 factory.go:221] Registration of the systemd container factory successfully Oct 13 04:59:24.896482 kubelet[2330]: E1013 04:59:24.896455 2330 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 04:59:24.904387 kubelet[2330]: I1013 04:59:24.904325 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 04:59:24.905291 kubelet[2330]: I1013 04:59:24.905252 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 04:59:24.905291 kubelet[2330]: I1013 04:59:24.905287 2330 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 04:59:24.905377 kubelet[2330]: I1013 04:59:24.905302 2330 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 04:59:24.905377 kubelet[2330]: I1013 04:59:24.905309 2330 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 04:59:24.905377 kubelet[2330]: E1013 04:59:24.905346 2330 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 04:59:24.906658 kubelet[2330]: I1013 04:59:24.906602 2330 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 04:59:24.906658 kubelet[2330]: I1013 04:59:24.906617 2330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 04:59:24.906658 kubelet[2330]: I1013 04:59:24.906633 2330 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:59:24.908164 kubelet[2330]: W1013 04:59:24.908099 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 13 04:59:24.908164 kubelet[2330]: E1013 04:59:24.908145 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:24.991086 kubelet[2330]: E1013 04:59:24.991035 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:25.006229 kubelet[2330]: E1013 04:59:25.006194 2330 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 04:59:25.029350 kubelet[2330]: I1013 04:59:25.029275 2330 policy_none.go:49] "None policy: Start" Oct 13 04:59:25.029350 kubelet[2330]: I1013 04:59:25.029300 2330 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 
04:59:25.029350 kubelet[2330]: I1013 04:59:25.029313 2330 state_mem.go:35] "Initializing new in-memory state store" Oct 13 04:59:25.035352 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 04:59:25.047730 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 04:59:25.050440 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 04:59:25.062073 kubelet[2330]: I1013 04:59:25.061888 2330 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 04:59:25.062073 kubelet[2330]: I1013 04:59:25.062062 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 04:59:25.062352 kubelet[2330]: I1013 04:59:25.062072 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 04:59:25.062352 kubelet[2330]: I1013 04:59:25.062331 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 04:59:25.063578 kubelet[2330]: E1013 04:59:25.063554 2330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 04:59:25.063628 kubelet[2330]: E1013 04:59:25.063592 2330 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 04:59:25.093306 kubelet[2330]: E1013 04:59:25.093077 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Oct 13 04:59:25.164776 kubelet[2330]: I1013 04:59:25.163671 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:59:25.164776 kubelet[2330]: E1013 04:59:25.164071 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 13 04:59:25.213691 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Oct 13 04:59:25.240592 kubelet[2330]: E1013 04:59:25.240562 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:25.243411 systemd[1]: Created slice kubepods-burstable-pod8b984ba4a9272739b1c8843a7556dc47.slice - libcontainer container kubepods-burstable-pod8b984ba4a9272739b1c8843a7556dc47.slice. Oct 13 04:59:25.263453 kubelet[2330]: E1013 04:59:25.263351 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:25.265682 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. 
Oct 13 04:59:25.267167 kubelet[2330]: E1013 04:59:25.267150 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:25.292531 kubelet[2330]: I1013 04:59:25.292508 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 04:59:25.292602 kubelet[2330]: I1013 04:59:25.292539 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b984ba4a9272739b1c8843a7556dc47-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b984ba4a9272739b1c8843a7556dc47\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:25.292602 kubelet[2330]: I1013 04:59:25.292588 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:25.292710 kubelet[2330]: I1013 04:59:25.292606 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:25.292710 kubelet[2330]: I1013 04:59:25.292620 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8b984ba4a9272739b1c8843a7556dc47-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b984ba4a9272739b1c8843a7556dc47\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:25.292710 kubelet[2330]: I1013 04:59:25.292666 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b984ba4a9272739b1c8843a7556dc47-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b984ba4a9272739b1c8843a7556dc47\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:25.292710 kubelet[2330]: I1013 04:59:25.292693 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:25.292836 kubelet[2330]: I1013 04:59:25.292732 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:25.292836 kubelet[2330]: I1013 04:59:25.292750 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:25.365681 kubelet[2330]: I1013 04:59:25.365608 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 
04:59:25.367078 kubelet[2330]: E1013 04:59:25.367018 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 13 04:59:25.494062 kubelet[2330]: E1013 04:59:25.494029 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Oct 13 04:59:25.541338 kubelet[2330]: E1013 04:59:25.541306 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.541867 containerd[1566]: time="2025-10-13T04:59:25.541818915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 13 04:59:25.556915 containerd[1566]: time="2025-10-13T04:59:25.556882153Z" level=info msg="connecting to shim a017416c93a30480ca5e90c8661117a4092db4d6b1dd942cc0017b2f1fb0dc7b" address="unix:///run/containerd/s/b891a9516c0ca013bcc42fa5ae786f9828e4ea0bc6418e603fe21efe0943ef9a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:59:25.564174 kubelet[2330]: E1013 04:59:25.564088 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.564541 containerd[1566]: time="2025-10-13T04:59:25.564504154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b984ba4a9272739b1c8843a7556dc47,Namespace:kube-system,Attempt:0,}" Oct 13 04:59:25.568152 kubelet[2330]: E1013 04:59:25.568079 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.568452 containerd[1566]: time="2025-10-13T04:59:25.568425476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 13 04:59:25.584438 systemd[1]: Started cri-containerd-a017416c93a30480ca5e90c8661117a4092db4d6b1dd942cc0017b2f1fb0dc7b.scope - libcontainer container a017416c93a30480ca5e90c8661117a4092db4d6b1dd942cc0017b2f1fb0dc7b. Oct 13 04:59:25.601769 containerd[1566]: time="2025-10-13T04:59:25.601621122Z" level=info msg="connecting to shim ab95e4bc733e1647cab549a074c0f5bc8d2303a2d57fcf21bed96c2d42ae6178" address="unix:///run/containerd/s/0f16cb7b50d28eb7931c228fb8839fae72face1c2a8b391e65d91461dc53fde1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:59:25.601769 containerd[1566]: time="2025-10-13T04:59:25.601740722Z" level=info msg="connecting to shim bda941398976c942c2222e6d5a22b4d9203c584abc56e9bfaa78124dd2edfefe" address="unix:///run/containerd/s/a4739ff2586772b4639d3f1431c10ab38d8516cff379d47f34d20cfb0fa6d0e3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:59:25.622454 systemd[1]: Started cri-containerd-ab95e4bc733e1647cab549a074c0f5bc8d2303a2d57fcf21bed96c2d42ae6178.scope - libcontainer container ab95e4bc733e1647cab549a074c0f5bc8d2303a2d57fcf21bed96c2d42ae6178. Oct 13 04:59:25.625409 systemd[1]: Started cri-containerd-bda941398976c942c2222e6d5a22b4d9203c584abc56e9bfaa78124dd2edfefe.scope - libcontainer container bda941398976c942c2222e6d5a22b4d9203c584abc56e9bfaa78124dd2edfefe. 
Oct 13 04:59:25.636738 containerd[1566]: time="2025-10-13T04:59:25.636695183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a017416c93a30480ca5e90c8661117a4092db4d6b1dd942cc0017b2f1fb0dc7b\"" Oct 13 04:59:25.637861 kubelet[2330]: E1013 04:59:25.637830 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.640735 containerd[1566]: time="2025-10-13T04:59:25.640691456Z" level=info msg="CreateContainer within sandbox \"a017416c93a30480ca5e90c8661117a4092db4d6b1dd942cc0017b2f1fb0dc7b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 04:59:25.646792 containerd[1566]: time="2025-10-13T04:59:25.646759934Z" level=info msg="Container 762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:25.654069 containerd[1566]: time="2025-10-13T04:59:25.654011590Z" level=info msg="CreateContainer within sandbox \"a017416c93a30480ca5e90c8661117a4092db4d6b1dd942cc0017b2f1fb0dc7b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11\"" Oct 13 04:59:25.654868 containerd[1566]: time="2025-10-13T04:59:25.654836168Z" level=info msg="StartContainer for \"762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11\"" Oct 13 04:59:25.655887 containerd[1566]: time="2025-10-13T04:59:25.655846639Z" level=info msg="connecting to shim 762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11" address="unix:///run/containerd/s/b891a9516c0ca013bcc42fa5ae786f9828e4ea0bc6418e603fe21efe0943ef9a" protocol=ttrpc version=3 Oct 13 04:59:25.664171 containerd[1566]: time="2025-10-13T04:59:25.664126803Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab95e4bc733e1647cab549a074c0f5bc8d2303a2d57fcf21bed96c2d42ae6178\"" Oct 13 04:59:25.664807 kubelet[2330]: E1013 04:59:25.664778 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.665589 containerd[1566]: time="2025-10-13T04:59:25.665560204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b984ba4a9272739b1c8843a7556dc47,Namespace:kube-system,Attempt:0,} returns sandbox id \"bda941398976c942c2222e6d5a22b4d9203c584abc56e9bfaa78124dd2edfefe\"" Oct 13 04:59:25.666039 kubelet[2330]: E1013 04:59:25.666015 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.667174 containerd[1566]: time="2025-10-13T04:59:25.666801098Z" level=info msg="CreateContainer within sandbox \"ab95e4bc733e1647cab549a074c0f5bc8d2303a2d57fcf21bed96c2d42ae6178\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 04:59:25.667997 containerd[1566]: time="2025-10-13T04:59:25.667935298Z" level=info msg="CreateContainer within sandbox \"bda941398976c942c2222e6d5a22b4d9203c584abc56e9bfaa78124dd2edfefe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 04:59:25.675019 containerd[1566]: time="2025-10-13T04:59:25.674977894Z" level=info msg="Container de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:25.677904 containerd[1566]: time="2025-10-13T04:59:25.677840648Z" level=info msg="Container 9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:25.683482 
systemd[1]: Started cri-containerd-762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11.scope - libcontainer container 762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11. Oct 13 04:59:25.693800 containerd[1566]: time="2025-10-13T04:59:25.693754036Z" level=info msg="CreateContainer within sandbox \"bda941398976c942c2222e6d5a22b4d9203c584abc56e9bfaa78124dd2edfefe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458\"" Oct 13 04:59:25.694286 containerd[1566]: time="2025-10-13T04:59:25.694246305Z" level=info msg="StartContainer for \"de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458\"" Oct 13 04:59:25.695410 containerd[1566]: time="2025-10-13T04:59:25.695380425Z" level=info msg="connecting to shim de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458" address="unix:///run/containerd/s/a4739ff2586772b4639d3f1431c10ab38d8516cff379d47f34d20cfb0fa6d0e3" protocol=ttrpc version=3 Oct 13 04:59:25.696010 containerd[1566]: time="2025-10-13T04:59:25.695969649Z" level=info msg="CreateContainer within sandbox \"ab95e4bc733e1647cab549a074c0f5bc8d2303a2d57fcf21bed96c2d42ae6178\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba\"" Oct 13 04:59:25.696429 containerd[1566]: time="2025-10-13T04:59:25.696370575Z" level=info msg="StartContainer for \"9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba\"" Oct 13 04:59:25.697727 containerd[1566]: time="2025-10-13T04:59:25.697696200Z" level=info msg="connecting to shim 9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba" address="unix:///run/containerd/s/0f16cb7b50d28eb7931c228fb8839fae72face1c2a8b391e65d91461dc53fde1" protocol=ttrpc version=3 Oct 13 04:59:25.715426 systemd[1]: Started cri-containerd-de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458.scope - 
libcontainer container de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458. Oct 13 04:59:25.718693 systemd[1]: Started cri-containerd-9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba.scope - libcontainer container 9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba. Oct 13 04:59:25.728289 containerd[1566]: time="2025-10-13T04:59:25.727121146Z" level=info msg="StartContainer for \"762b25750cfaed391d184b6307cc6c93743914791c15e8c21854c7002a587a11\" returns successfully" Oct 13 04:59:25.740067 kubelet[2330]: W1013 04:59:25.740008 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 13 04:59:25.740136 kubelet[2330]: E1013 04:59:25.740073 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:25.759461 containerd[1566]: time="2025-10-13T04:59:25.759418065Z" level=info msg="StartContainer for \"de7476308e5788984732a4bade7f9e2233621ebdb42bccd6a1039a60f856a458\" returns successfully" Oct 13 04:59:25.769134 kubelet[2330]: I1013 04:59:25.769102 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:59:25.769679 kubelet[2330]: E1013 04:59:25.769652 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 13 04:59:25.772444 containerd[1566]: time="2025-10-13T04:59:25.772413066Z" level=info msg="StartContainer for 
\"9724cebc17095d982fc635c894a5ab2cd4858da0baf3f29374b0591b0d7ffbba\" returns successfully" Oct 13 04:59:25.805034 kubelet[2330]: W1013 04:59:25.804980 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 13 04:59:25.805147 kubelet[2330]: E1013 04:59:25.805042 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Oct 13 04:59:25.911653 kubelet[2330]: E1013 04:59:25.911347 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:25.911653 kubelet[2330]: E1013 04:59:25.911475 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.915289 kubelet[2330]: E1013 04:59:25.914483 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:25.915289 kubelet[2330]: E1013 04:59:25.914582 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:25.917106 kubelet[2330]: E1013 04:59:25.917084 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:25.917338 kubelet[2330]: E1013 04:59:25.917318 2330 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:26.572000 kubelet[2330]: I1013 04:59:26.571673 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:59:26.918861 kubelet[2330]: E1013 04:59:26.918631 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:26.919623 kubelet[2330]: E1013 04:59:26.919465 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:26.919623 kubelet[2330]: E1013 04:59:26.919008 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 04:59:26.919623 kubelet[2330]: E1013 04:59:26.919575 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:28.211298 kubelet[2330]: E1013 04:59:28.210683 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 04:59:28.293318 kubelet[2330]: I1013 04:59:28.293226 2330 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 04:59:28.293318 kubelet[2330]: E1013 04:59:28.293291 2330 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 04:59:28.301343 kubelet[2330]: E1013 04:59:28.301316 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:28.402391 kubelet[2330]: E1013 04:59:28.402353 2330 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:28.503325 kubelet[2330]: E1013 04:59:28.503211 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:28.603869 kubelet[2330]: E1013 04:59:28.603836 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:28.704527 kubelet[2330]: E1013 04:59:28.704474 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:28.805344 kubelet[2330]: E1013 04:59:28.805184 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:28.884392 kubelet[2330]: I1013 04:59:28.884324 2330 apiserver.go:52] "Watching apiserver" Oct 13 04:59:28.891474 kubelet[2330]: I1013 04:59:28.891445 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:28.891544 kubelet[2330]: I1013 04:59:28.891476 2330 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 04:59:28.897005 kubelet[2330]: E1013 04:59:28.896964 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:28.897005 kubelet[2330]: I1013 04:59:28.896992 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:28.898540 kubelet[2330]: E1013 04:59:28.898498 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:28.898540 kubelet[2330]: I1013 04:59:28.898522 2330 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 04:59:28.900063 kubelet[2330]: E1013 04:59:28.900035 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 04:59:28.949096 kubelet[2330]: I1013 04:59:28.948966 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:28.950627 kubelet[2330]: E1013 04:59:28.950476 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:28.950719 kubelet[2330]: E1013 04:59:28.950704 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:30.456437 systemd[1]: Reload requested from client PID 2608 ('systemctl') (unit session-7.scope)... Oct 13 04:59:30.456454 systemd[1]: Reloading... Oct 13 04:59:30.520416 zram_generator::config[2654]: No configuration found. Oct 13 04:59:30.781640 systemd[1]: Reloading finished in 324 ms. Oct 13 04:59:30.797696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:30.817439 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 04:59:30.817683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 04:59:30.817736 systemd[1]: kubelet.service: Consumed 1.211s CPU time, 128.9M memory peak. Oct 13 04:59:30.819318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 04:59:30.949397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 04:59:30.953703 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 04:59:30.989557 kubelet[2694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:59:30.989557 kubelet[2694]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 04:59:30.989557 kubelet[2694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 04:59:30.989873 kubelet[2694]: I1013 04:59:30.989626 2694 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 04:59:30.997284 kubelet[2694]: I1013 04:59:30.996502 2694 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 04:59:30.997284 kubelet[2694]: I1013 04:59:30.996529 2694 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 04:59:30.997284 kubelet[2694]: I1013 04:59:30.996760 2694 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 04:59:30.998036 kubelet[2694]: I1013 04:59:30.998000 2694 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 13 04:59:31.000295 kubelet[2694]: I1013 04:59:31.000272 2694 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 04:59:31.003761 kubelet[2694]: I1013 04:59:31.003741 2694 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 04:59:31.007395 kubelet[2694]: I1013 04:59:31.007372 2694 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 04:59:31.007703 kubelet[2694]: I1013 04:59:31.007665 2694 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 04:59:31.007880 kubelet[2694]: I1013 04:59:31.007705 2694 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 04:59:31.007966 kubelet[2694]: I1013 04:59:31.007891 2694 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 04:59:31.007966 kubelet[2694]: I1013 04:59:31.007901 2694 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 04:59:31.007966 kubelet[2694]: I1013 04:59:31.007965 2694 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:59:31.008105 kubelet[2694]: I1013 04:59:31.008092 2694 kubelet.go:446] "Attempting to sync node with API server" Oct 13 04:59:31.008135 kubelet[2694]: I1013 04:59:31.008110 2694 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 04:59:31.008135 kubelet[2694]: I1013 04:59:31.008129 2694 kubelet.go:352] "Adding apiserver pod source" Oct 13 04:59:31.011099 kubelet[2694]: I1013 04:59:31.008138 2694 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 04:59:31.011099 kubelet[2694]: I1013 04:59:31.009140 2694 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 04:59:31.011099 kubelet[2694]: I1013 04:59:31.009589 2694 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 04:59:31.011099 kubelet[2694]: I1013 04:59:31.009967 2694 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 04:59:31.011099 kubelet[2694]: I1013 04:59:31.009994 2694 server.go:1287] "Started kubelet" Oct 13 04:59:31.011099 kubelet[2694]: I1013 04:59:31.010798 2694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 04:59:31.011099 
kubelet[2694]: I1013 04:59:31.011106 2694 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 04:59:31.011308 kubelet[2694]: I1013 04:59:31.011174 2694 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 04:59:31.011791 kubelet[2694]: I1013 04:59:31.011769 2694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 04:59:31.012738 kubelet[2694]: I1013 04:59:31.012702 2694 server.go:479] "Adding debug handlers to kubelet server" Oct 13 04:59:31.013965 kubelet[2694]: E1013 04:59:31.013932 2694 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 04:59:31.014028 kubelet[2694]: I1013 04:59:31.013994 2694 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 04:59:31.014193 kubelet[2694]: I1013 04:59:31.014164 2694 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 04:59:31.014336 kubelet[2694]: I1013 04:59:31.014317 2694 reconciler.go:26] "Reconciler: start to sync state" Oct 13 04:59:31.016578 kubelet[2694]: I1013 04:59:31.016546 2694 factory.go:221] Registration of the systemd container factory successfully Oct 13 04:59:31.018267 kubelet[2694]: I1013 04:59:31.016651 2694 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 04:59:31.018513 kubelet[2694]: I1013 04:59:31.018487 2694 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 04:59:31.025216 kubelet[2694]: I1013 04:59:31.025162 2694 factory.go:221] Registration of the containerd container factory successfully Oct 13 04:59:31.048923 kubelet[2694]: I1013 04:59:31.048772 2694 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 13 04:59:31.051169 kubelet[2694]: I1013 04:59:31.050873 2694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 04:59:31.051169 kubelet[2694]: I1013 04:59:31.050910 2694 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 04:59:31.051169 kubelet[2694]: I1013 04:59:31.051079 2694 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 04:59:31.051169 kubelet[2694]: I1013 04:59:31.051091 2694 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 04:59:31.051586 kubelet[2694]: E1013 04:59:31.051134 2694 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 04:59:31.077411 kubelet[2694]: I1013 04:59:31.077384 2694 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 04:59:31.077411 kubelet[2694]: I1013 04:59:31.077403 2694 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 04:59:31.077532 kubelet[2694]: I1013 04:59:31.077423 2694 state_mem.go:36] "Initialized new in-memory state store" Oct 13 04:59:31.077582 kubelet[2694]: I1013 04:59:31.077559 2694 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 04:59:31.077619 kubelet[2694]: I1013 04:59:31.077576 2694 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 04:59:31.077639 kubelet[2694]: I1013 04:59:31.077619 2694 policy_none.go:49] "None policy: Start" Oct 13 04:59:31.077639 kubelet[2694]: I1013 04:59:31.077627 2694 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 04:59:31.077639 kubelet[2694]: I1013 04:59:31.077636 2694 state_mem.go:35] "Initializing new in-memory state store" Oct 13 04:59:31.077745 kubelet[2694]: I1013 04:59:31.077733 2694 state_mem.go:75] "Updated machine memory state" Oct 13 04:59:31.082157 kubelet[2694]: I1013 04:59:31.082131 2694 manager.go:519] "Failed to read 
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 04:59:31.082331 kubelet[2694]: I1013 04:59:31.082301 2694 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 04:59:31.082373 kubelet[2694]: I1013 04:59:31.082317 2694 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 04:59:31.083019 kubelet[2694]: I1013 04:59:31.082877 2694 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 04:59:31.085654 kubelet[2694]: E1013 04:59:31.085620 2694 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 04:59:31.153048 kubelet[2694]: I1013 04:59:31.152869 2694 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:31.153048 kubelet[2694]: I1013 04:59:31.152937 2694 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 04:59:31.153512 kubelet[2694]: I1013 04:59:31.153294 2694 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:31.186754 kubelet[2694]: I1013 04:59:31.186713 2694 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 04:59:31.195334 kubelet[2694]: I1013 04:59:31.195037 2694 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 04:59:31.195434 kubelet[2694]: I1013 04:59:31.195211 2694 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 04:59:31.216038 kubelet[2694]: I1013 04:59:31.216009 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b984ba4a9272739b1c8843a7556dc47-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b984ba4a9272739b1c8843a7556dc47\") 
" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:31.216151 kubelet[2694]: I1013 04:59:31.216045 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b984ba4a9272739b1c8843a7556dc47-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b984ba4a9272739b1c8843a7556dc47\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:31.216151 kubelet[2694]: I1013 04:59:31.216071 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:31.216151 kubelet[2694]: I1013 04:59:31.216088 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:31.216151 kubelet[2694]: I1013 04:59:31.216104 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 04:59:31.216151 kubelet[2694]: I1013 04:59:31.216119 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b984ba4a9272739b1c8843a7556dc47-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"8b984ba4a9272739b1c8843a7556dc47\") " pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:31.216301 kubelet[2694]: I1013 04:59:31.216141 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:31.216301 kubelet[2694]: I1013 04:59:31.216157 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:31.216301 kubelet[2694]: I1013 04:59:31.216174 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:31.459471 kubelet[2694]: E1013 04:59:31.459369 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:31.460979 kubelet[2694]: E1013 04:59:31.460873 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:31.461113 kubelet[2694]: E1013 04:59:31.461073 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 
04:59:32.009069 kubelet[2694]: I1013 04:59:32.009025 2694 apiserver.go:52] "Watching apiserver" Oct 13 04:59:32.014485 kubelet[2694]: I1013 04:59:32.014461 2694 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 04:59:32.067438 kubelet[2694]: I1013 04:59:32.067341 2694 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:32.067524 kubelet[2694]: I1013 04:59:32.067449 2694 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:32.068533 kubelet[2694]: E1013 04:59:32.068474 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:32.072442 kubelet[2694]: E1013 04:59:32.072309 2694 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 04:59:32.072553 kubelet[2694]: E1013 04:59:32.072537 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:32.073280 kubelet[2694]: E1013 04:59:32.073099 2694 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 04:59:32.073280 kubelet[2694]: E1013 04:59:32.073222 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:32.094612 kubelet[2694]: I1013 04:59:32.094565 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.094551573 podStartE2EDuration="1.094551573s" 
podCreationTimestamp="2025-10-13 04:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:59:32.094512622 +0000 UTC m=+1.137742805" watchObservedRunningTime="2025-10-13 04:59:32.094551573 +0000 UTC m=+1.137781756" Oct 13 04:59:32.094727 kubelet[2694]: I1013 04:59:32.094656 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.094652773 podStartE2EDuration="1.094652773s" podCreationTimestamp="2025-10-13 04:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:59:32.087714569 +0000 UTC m=+1.130944832" watchObservedRunningTime="2025-10-13 04:59:32.094652773 +0000 UTC m=+1.137882916" Oct 13 04:59:32.111162 kubelet[2694]: I1013 04:59:32.111085 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.11103496 podStartE2EDuration="1.11103496s" podCreationTimestamp="2025-10-13 04:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:59:32.104029703 +0000 UTC m=+1.147259886" watchObservedRunningTime="2025-10-13 04:59:32.11103496 +0000 UTC m=+1.154265143" Oct 13 04:59:33.068442 kubelet[2694]: E1013 04:59:33.068323 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:33.068442 kubelet[2694]: E1013 04:59:33.068386 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:33.068785 kubelet[2694]: E1013 04:59:33.068578 2694 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:34.069509 kubelet[2694]: E1013 04:59:34.069457 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:34.069993 kubelet[2694]: E1013 04:59:34.069499 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:35.606064 kubelet[2694]: I1013 04:59:35.606023 2694 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 04:59:35.606468 containerd[1566]: time="2025-10-13T04:59:35.606445116Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 04:59:35.606705 kubelet[2694]: I1013 04:59:35.606644 2694 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 04:59:36.457884 systemd[1]: Created slice kubepods-besteffort-podfa409ae2_e723_45dd_b7da_009268e04327.slice - libcontainer container kubepods-besteffort-podfa409ae2_e723_45dd_b7da_009268e04327.slice. 
Oct 13 04:59:36.551033 kubelet[2694]: I1013 04:59:36.550960 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa409ae2-e723-45dd-b7da-009268e04327-kube-proxy\") pod \"kube-proxy-mh84k\" (UID: \"fa409ae2-e723-45dd-b7da-009268e04327\") " pod="kube-system/kube-proxy-mh84k" Oct 13 04:59:36.551033 kubelet[2694]: I1013 04:59:36.551025 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2nd\" (UniqueName: \"kubernetes.io/projected/fa409ae2-e723-45dd-b7da-009268e04327-kube-api-access-rd2nd\") pod \"kube-proxy-mh84k\" (UID: \"fa409ae2-e723-45dd-b7da-009268e04327\") " pod="kube-system/kube-proxy-mh84k" Oct 13 04:59:36.551203 kubelet[2694]: I1013 04:59:36.551061 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa409ae2-e723-45dd-b7da-009268e04327-xtables-lock\") pod \"kube-proxy-mh84k\" (UID: \"fa409ae2-e723-45dd-b7da-009268e04327\") " pod="kube-system/kube-proxy-mh84k" Oct 13 04:59:36.551203 kubelet[2694]: I1013 04:59:36.551090 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa409ae2-e723-45dd-b7da-009268e04327-lib-modules\") pod \"kube-proxy-mh84k\" (UID: \"fa409ae2-e723-45dd-b7da-009268e04327\") " pod="kube-system/kube-proxy-mh84k" Oct 13 04:59:36.768645 kubelet[2694]: E1013 04:59:36.768517 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:36.770055 containerd[1566]: time="2025-10-13T04:59:36.770020659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mh84k,Uid:fa409ae2-e723-45dd-b7da-009268e04327,Namespace:kube-system,Attempt:0,}" Oct 
13 04:59:36.778657 systemd[1]: Created slice kubepods-besteffort-pod6d1236be_37be_476f_94eb_a07bb5e46752.slice - libcontainer container kubepods-besteffort-pod6d1236be_37be_476f_94eb_a07bb5e46752.slice. Oct 13 04:59:36.792279 containerd[1566]: time="2025-10-13T04:59:36.791758965Z" level=info msg="connecting to shim 8fac3107e60bc73c5ad660337aee6f6635de2bb727bf59bad6e26df53dc6a648" address="unix:///run/containerd/s/679fe8b36474efcb4061a0b84d1bd02a300c06021228f2c65c3be10abaf37f12" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:59:36.822419 systemd[1]: Started cri-containerd-8fac3107e60bc73c5ad660337aee6f6635de2bb727bf59bad6e26df53dc6a648.scope - libcontainer container 8fac3107e60bc73c5ad660337aee6f6635de2bb727bf59bad6e26df53dc6a648. Oct 13 04:59:36.842815 containerd[1566]: time="2025-10-13T04:59:36.842779743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mh84k,Uid:fa409ae2-e723-45dd-b7da-009268e04327,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fac3107e60bc73c5ad660337aee6f6635de2bb727bf59bad6e26df53dc6a648\"" Oct 13 04:59:36.843419 kubelet[2694]: E1013 04:59:36.843398 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:36.846164 containerd[1566]: time="2025-10-13T04:59:36.846124846Z" level=info msg="CreateContainer within sandbox \"8fac3107e60bc73c5ad660337aee6f6635de2bb727bf59bad6e26df53dc6a648\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 04:59:36.853782 kubelet[2694]: I1013 04:59:36.853726 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmb9p\" (UniqueName: \"kubernetes.io/projected/6d1236be-37be-476f-94eb-a07bb5e46752-kube-api-access-fmb9p\") pod \"tigera-operator-755d956888-2sp55\" (UID: \"6d1236be-37be-476f-94eb-a07bb5e46752\") " pod="tigera-operator/tigera-operator-755d956888-2sp55" Oct 13 
04:59:36.853782 kubelet[2694]: I1013 04:59:36.853775 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d1236be-37be-476f-94eb-a07bb5e46752-var-lib-calico\") pod \"tigera-operator-755d956888-2sp55\" (UID: \"6d1236be-37be-476f-94eb-a07bb5e46752\") " pod="tigera-operator/tigera-operator-755d956888-2sp55" Oct 13 04:59:36.854465 containerd[1566]: time="2025-10-13T04:59:36.854432219Z" level=info msg="Container e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:36.861555 containerd[1566]: time="2025-10-13T04:59:36.861503729Z" level=info msg="CreateContainer within sandbox \"8fac3107e60bc73c5ad660337aee6f6635de2bb727bf59bad6e26df53dc6a648\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144\"" Oct 13 04:59:36.863365 containerd[1566]: time="2025-10-13T04:59:36.863338630Z" level=info msg="StartContainer for \"e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144\"" Oct 13 04:59:36.868933 containerd[1566]: time="2025-10-13T04:59:36.868895493Z" level=info msg="connecting to shim e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144" address="unix:///run/containerd/s/679fe8b36474efcb4061a0b84d1bd02a300c06021228f2c65c3be10abaf37f12" protocol=ttrpc version=3 Oct 13 04:59:36.890417 systemd[1]: Started cri-containerd-e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144.scope - libcontainer container e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144. 
Oct 13 04:59:36.921397 containerd[1566]: time="2025-10-13T04:59:36.921356698Z" level=info msg="StartContainer for \"e7bfd7eb0d6aae187b6882218b8119a7a883b6ee178a545833e5f6b518341144\" returns successfully" Oct 13 04:59:37.078206 kubelet[2694]: E1013 04:59:37.078111 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:37.083806 containerd[1566]: time="2025-10-13T04:59:37.083765520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-2sp55,Uid:6d1236be-37be-476f-94eb-a07bb5e46752,Namespace:tigera-operator,Attempt:0,}" Oct 13 04:59:37.090276 kubelet[2694]: I1013 04:59:37.089678 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mh84k" podStartSLOduration=1.089662518 podStartE2EDuration="1.089662518s" podCreationTimestamp="2025-10-13 04:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 04:59:37.089334631 +0000 UTC m=+6.132564815" watchObservedRunningTime="2025-10-13 04:59:37.089662518 +0000 UTC m=+6.132892701" Oct 13 04:59:37.102745 containerd[1566]: time="2025-10-13T04:59:37.102649252Z" level=info msg="connecting to shim cc6e7c5e683d7b7970a256911c3d31406b401a7b0dd37469cb786774e3060dcf" address="unix:///run/containerd/s/b812f41d7117e62f351d3bd154a3845ae8e3ebc4f97ad7312cdc86b07d827e59" namespace=k8s.io protocol=ttrpc version=3 Oct 13 04:59:37.125449 systemd[1]: Started cri-containerd-cc6e7c5e683d7b7970a256911c3d31406b401a7b0dd37469cb786774e3060dcf.scope - libcontainer container cc6e7c5e683d7b7970a256911c3d31406b401a7b0dd37469cb786774e3060dcf. 
Oct 13 04:59:37.155149 containerd[1566]: time="2025-10-13T04:59:37.155099564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-2sp55,Uid:6d1236be-37be-476f-94eb-a07bb5e46752,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cc6e7c5e683d7b7970a256911c3d31406b401a7b0dd37469cb786774e3060dcf\"" Oct 13 04:59:37.158002 containerd[1566]: time="2025-10-13T04:59:37.157973848Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 04:59:37.245183 kubelet[2694]: E1013 04:59:37.245142 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:38.081404 kubelet[2694]: E1013 04:59:38.081211 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:38.246115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111341667.mount: Deactivated successfully. 
Oct 13 04:59:38.592269 containerd[1566]: time="2025-10-13T04:59:38.592204228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:38.592671 containerd[1566]: time="2025-10-13T04:59:38.592619444Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Oct 13 04:59:38.593592 containerd[1566]: time="2025-10-13T04:59:38.593554229Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:38.595585 containerd[1566]: time="2025-10-13T04:59:38.595549609Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:38.596190 containerd[1566]: time="2025-10-13T04:59:38.596149196Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.438138401s" Oct 13 04:59:38.596218 containerd[1566]: time="2025-10-13T04:59:38.596186823Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Oct 13 04:59:38.600014 containerd[1566]: time="2025-10-13T04:59:38.599408276Z" level=info msg="CreateContainer within sandbox \"cc6e7c5e683d7b7970a256911c3d31406b401a7b0dd37469cb786774e3060dcf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 04:59:38.607584 containerd[1566]: time="2025-10-13T04:59:38.607547269Z" level=info msg="Container 
f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:38.610419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055656604.mount: Deactivated successfully. Oct 13 04:59:38.614079 containerd[1566]: time="2025-10-13T04:59:38.613954069Z" level=info msg="CreateContainer within sandbox \"cc6e7c5e683d7b7970a256911c3d31406b401a7b0dd37469cb786774e3060dcf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16\"" Oct 13 04:59:38.614495 containerd[1566]: time="2025-10-13T04:59:38.614462951Z" level=info msg="StartContainer for \"f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16\"" Oct 13 04:59:38.615736 containerd[1566]: time="2025-10-13T04:59:38.615694908Z" level=info msg="connecting to shim f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16" address="unix:///run/containerd/s/b812f41d7117e62f351d3bd154a3845ae8e3ebc4f97ad7312cdc86b07d827e59" protocol=ttrpc version=3 Oct 13 04:59:38.638438 systemd[1]: Started cri-containerd-f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16.scope - libcontainer container f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16. 
Oct 13 04:59:38.670026 containerd[1566]: time="2025-10-13T04:59:38.669990113Z" level=info msg="StartContainer for \"f0907e1d06963de5c7cb8d9de5905963889f888370b2e6326470cd7a8393fb16\" returns successfully" Oct 13 04:59:39.084863 kubelet[2694]: E1013 04:59:39.084748 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:39.095900 kubelet[2694]: I1013 04:59:39.095835 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-2sp55" podStartSLOduration=1.6535699830000001 podStartE2EDuration="3.095818425s" podCreationTimestamp="2025-10-13 04:59:36 +0000 UTC" firstStartedPulling="2025-10-13 04:59:37.156161123 +0000 UTC m=+6.199391306" lastFinishedPulling="2025-10-13 04:59:38.598409565 +0000 UTC m=+7.641639748" observedRunningTime="2025-10-13 04:59:39.095625375 +0000 UTC m=+8.138855558" watchObservedRunningTime="2025-10-13 04:59:39.095818425 +0000 UTC m=+8.139048609" Oct 13 04:59:41.111122 kubelet[2694]: E1013 04:59:41.111086 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:42.089918 kubelet[2694]: E1013 04:59:42.089609 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:42.104034 kubelet[2694]: E1013 04:59:42.103996 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:43.091863 kubelet[2694]: E1013 04:59:43.091453 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:44.156941 sudo[1767]: pam_unix(sudo:session): session closed for user root Oct 13 04:59:44.158703 sshd[1766]: Connection closed by 10.0.0.1 port 36942 Oct 13 04:59:44.161631 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Oct 13 04:59:44.165958 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:36942.service: Deactivated successfully. Oct 13 04:59:44.168073 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 04:59:44.170293 systemd[1]: session-7.scope: Consumed 6.453s CPU time, 218.3M memory peak. Oct 13 04:59:44.171344 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit. Oct 13 04:59:44.173205 systemd-logind[1533]: Removed session 7. Oct 13 04:59:46.866649 update_engine[1538]: I20251013 04:59:46.866585 1538 update_attempter.cc:509] Updating boot flags... Oct 13 04:59:47.840304 systemd[1]: Created slice kubepods-besteffort-pod9ef9065c_6b6c_4ac6_a582_a0c0a55d3a77.slice - libcontainer container kubepods-besteffort-pod9ef9065c_6b6c_4ac6_a582_a0c0a55d3a77.slice. 
Oct 13 04:59:47.922893 kubelet[2694]: I1013 04:59:47.922847 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnpd\" (UniqueName: \"kubernetes.io/projected/9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77-kube-api-access-bgnpd\") pod \"calico-typha-698845f857-wsbm4\" (UID: \"9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77\") " pod="calico-system/calico-typha-698845f857-wsbm4"
Oct 13 04:59:47.923237 kubelet[2694]: I1013 04:59:47.923102 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77-tigera-ca-bundle\") pod \"calico-typha-698845f857-wsbm4\" (UID: \"9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77\") " pod="calico-system/calico-typha-698845f857-wsbm4"
Oct 13 04:59:47.923337 kubelet[2694]: I1013 04:59:47.923238 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77-typha-certs\") pod \"calico-typha-698845f857-wsbm4\" (UID: \"9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77\") " pod="calico-system/calico-typha-698845f857-wsbm4"
Oct 13 04:59:48.146204 kubelet[2694]: E1013 04:59:48.145736 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 04:59:48.147401 containerd[1566]: time="2025-10-13T04:59:48.147349930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698845f857-wsbm4,Uid:9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77,Namespace:calico-system,Attempt:0,}"
Oct 13 04:59:48.167425 systemd[1]: Created slice kubepods-besteffort-podfc8d77c7_1d04_4486_9f95_8feb3abc2230.slice - libcontainer container kubepods-besteffort-podfc8d77c7_1d04_4486_9f95_8feb3abc2230.slice.
Oct 13 04:59:48.202853 containerd[1566]: time="2025-10-13T04:59:48.202807293Z" level=info msg="connecting to shim ff2b0f6f5dbd679e3098079af0b821eb0e26b92ec33340b6eec6f41122fbbeda" address="unix:///run/containerd/s/48b1e2b1e84c0c252777a12308a3e9b1b11cd454662a3d9457ab22a5b630c674" namespace=k8s.io protocol=ttrpc version=3
Oct 13 04:59:48.225633 kubelet[2694]: I1013 04:59:48.224863 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-policysync\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225633 kubelet[2694]: I1013 04:59:48.224909 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-var-lib-calico\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225633 kubelet[2694]: I1013 04:59:48.224925 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-var-run-calico\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225633 kubelet[2694]: I1013 04:59:48.224941 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc8d77c7-1d04-4486-9f95-8feb3abc2230-node-certs\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225633 kubelet[2694]: I1013 04:59:48.224958 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8d77c7-1d04-4486-9f95-8feb3abc2230-tigera-ca-bundle\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225845 kubelet[2694]: I1013 04:59:48.225391 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9xq\" (UniqueName: \"kubernetes.io/projected/fc8d77c7-1d04-4486-9f95-8feb3abc2230-kube-api-access-lx9xq\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225845 kubelet[2694]: I1013 04:59:48.225427 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-cni-log-dir\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225845 kubelet[2694]: I1013 04:59:48.225451 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-cni-net-dir\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225845 kubelet[2694]: I1013 04:59:48.225470 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-cni-bin-dir\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225845 kubelet[2694]: I1013 04:59:48.225488 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-flexvol-driver-host\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225945 kubelet[2694]: I1013 04:59:48.225506 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-lib-modules\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.225945 kubelet[2694]: I1013 04:59:48.225522 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc8d77c7-1d04-4486-9f95-8feb3abc2230-xtables-lock\") pod \"calico-node-kk2lj\" (UID: \"fc8d77c7-1d04-4486-9f95-8feb3abc2230\") " pod="calico-system/calico-node-kk2lj"
Oct 13 04:59:48.247499 systemd[1]: Started cri-containerd-ff2b0f6f5dbd679e3098079af0b821eb0e26b92ec33340b6eec6f41122fbbeda.scope - libcontainer container ff2b0f6f5dbd679e3098079af0b821eb0e26b92ec33340b6eec6f41122fbbeda.
Oct 13 04:59:48.298674 containerd[1566]: time="2025-10-13T04:59:48.298564645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698845f857-wsbm4,Uid:9ef9065c-6b6c-4ac6-a582-a0c0a55d3a77,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff2b0f6f5dbd679e3098079af0b821eb0e26b92ec33340b6eec6f41122fbbeda\""
Oct 13 04:59:48.300112 kubelet[2694]: E1013 04:59:48.300072 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 04:59:48.307877 containerd[1566]: time="2025-10-13T04:59:48.307842226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Oct 13 04:59:48.330888 kubelet[2694]: E1013 04:59:48.330848 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 04:59:48.330888 kubelet[2694]: W1013 04:59:48.330872 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 04:59:48.330888 kubelet[2694]: E1013 04:59:48.330892 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 04:59:48.448567 kubelet[2694]: E1013 04:59:48.448442 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwwff" podUID="7838801e-a5fa-47ff-b132-bd072f0992f1"
Oct 13 04:59:48.473386 containerd[1566]: time="2025-10-13T04:59:48.473323460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kk2lj,Uid:fc8d77c7-1d04-4486-9f95-8feb3abc2230,Namespace:calico-system,Attempt:0,}"
Oct 13 04:59:48.514798 containerd[1566]: time="2025-10-13T04:59:48.514755285Z" level=info msg="connecting to shim 8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13" address="unix:///run/containerd/s/008645ec42616e6ef4bfb7a7952f0ecac3ade270443f5b44869c0134d6c695b7" namespace=k8s.io protocol=ttrpc version=3
Oct 13 04:59:48.527413 kubelet[2694]: I1013 04:59:48.527313 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7838801e-a5fa-47ff-b132-bd072f0992f1-kubelet-dir\") pod \"csi-node-driver-kwwff\" (UID: \"7838801e-a5fa-47ff-b132-bd072f0992f1\") " pod="calico-system/csi-node-driver-kwwff"
Oct 13 04:59:48.527784 kubelet[2694]: I1013 04:59:48.527749 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7838801e-a5fa-47ff-b132-bd072f0992f1-varrun\") pod \"csi-node-driver-kwwff\" (UID: \"7838801e-a5fa-47ff-b132-bd072f0992f1\") " pod="calico-system/csi-node-driver-kwwff"
Oct 13 04:59:48.528133 kubelet[2694]: I1013 04:59:48.528081 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfjjz\" (UniqueName: \"kubernetes.io/projected/7838801e-a5fa-47ff-b132-bd072f0992f1-kube-api-access-zfjjz\") pod \"csi-node-driver-kwwff\" (UID: \"7838801e-a5fa-47ff-b132-bd072f0992f1\") " pod="calico-system/csi-node-driver-kwwff"
Oct 13 04:59:48.528564 kubelet[2694]: I1013 04:59:48.528520 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7838801e-a5fa-47ff-b132-bd072f0992f1-registration-dir\") pod \"csi-node-driver-kwwff\" (UID: \"7838801e-a5fa-47ff-b132-bd072f0992f1\") " pod="calico-system/csi-node-driver-kwwff"
Oct 13 04:59:48.529104 kubelet[2694]: I1013 04:59:48.529083 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7838801e-a5fa-47ff-b132-bd072f0992f1-socket-dir\") pod \"csi-node-driver-kwwff\" (UID: \"7838801e-a5fa-47ff-b132-bd072f0992f1\") " pod="calico-system/csi-node-driver-kwwff"
Oct 13 04:59:48.547445 systemd[1]: Started cri-containerd-8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13.scope - libcontainer container 8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13.
Oct 13 04:59:48.576813 containerd[1566]: time="2025-10-13T04:59:48.576737832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kk2lj,Uid:fc8d77c7-1d04-4486-9f95-8feb3abc2230,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\""
Oct 13 04:59:49.199042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753569104.mount: Deactivated successfully.
Oct 13 04:59:50.051930 kubelet[2694]: E1013 04:59:50.051878 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwwff" podUID="7838801e-a5fa-47ff-b132-bd072f0992f1"
Oct 13 04:59:50.814428 containerd[1566]: time="2025-10-13T04:59:50.814374503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 04:59:50.814890 containerd[1566]: time="2025-10-13T04:59:50.814842722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Oct 13 04:59:50.815642 containerd[1566]: time="2025-10-13T04:59:50.815612776Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 04:59:50.817493 containerd[1566]: time="2025-10-13T04:59:50.817445155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 04:59:50.818179 containerd[1566]: time="2025-10-13T04:59:50.817990243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.509852453s"
Oct 13 04:59:50.818179 containerd[1566]: time="2025-10-13T04:59:50.818022576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Oct 13 04:59:50.819021 containerd[1566]: time="2025-10-13T04:59:50.818971338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Oct 13 04:59:50.830185 containerd[1566]: time="2025-10-13T04:59:50.829841046Z" level=info msg="CreateContainer within sandbox \"ff2b0f6f5dbd679e3098079af0b821eb0e26b92ec33340b6eec6f41122fbbeda\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 13 04:59:50.838279 containerd[1566]: time="2025-10-13T04:59:50.837579440Z" level=info msg="Container dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a: CDI devices from CRI Config.CDIDevices: []"
Oct 13 04:59:50.844078 containerd[1566]: time="2025-10-13T04:59:50.844044307Z" level=info msg="CreateContainer within sandbox \"ff2b0f6f5dbd679e3098079af0b821eb0e26b92ec33340b6eec6f41122fbbeda\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a\""
Oct 13 04:59:50.844768 containerd[1566]: time="2025-10-13T04:59:50.844749336Z" level=info msg="StartContainer for \"dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a\""
Oct 13 04:59:50.845975 containerd[1566]: time="2025-10-13T04:59:50.845949834Z" level=info msg="connecting to shim dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a" address="unix:///run/containerd/s/48b1e2b1e84c0c252777a12308a3e9b1b11cd454662a3d9457ab22a5b630c674" protocol=ttrpc version=3
Oct 13 04:59:50.864406 systemd[1]: Started cri-containerd-dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a.scope - libcontainer container dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a.
Oct 13 04:59:50.901562 containerd[1566]: time="2025-10-13T04:59:50.901527325Z" level=info msg="StartContainer for \"dd66235b59ef9d4f67d7c845dabb1107768f5472aa0d5b01524e0e04f1e4a91a\" returns successfully"
Oct 13 04:59:51.123127 kubelet[2694]: E1013 04:59:51.122747 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Error: unexpected end of JSON input" Oct 13 04:59:51.145792 kubelet[2694]: E1013 04:59:51.145764 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.146038 kubelet[2694]: W1013 04:59:51.145936 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.146038 kubelet[2694]: E1013 04:59:51.145960 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.147007 kubelet[2694]: I1013 04:59:51.146929 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-698845f857-wsbm4" podStartSLOduration=1.630799234 podStartE2EDuration="4.146915488s" podCreationTimestamp="2025-10-13 04:59:47 +0000 UTC" firstStartedPulling="2025-10-13 04:59:48.302704426 +0000 UTC m=+17.345934609" lastFinishedPulling="2025-10-13 04:59:50.81882068 +0000 UTC m=+19.862050863" observedRunningTime="2025-10-13 04:59:51.146316189 +0000 UTC m=+20.189546372" watchObservedRunningTime="2025-10-13 04:59:51.146915488 +0000 UTC m=+20.190145671" Oct 13 04:59:51.148638 kubelet[2694]: E1013 04:59:51.148608 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.148638 kubelet[2694]: W1013 04:59:51.148632 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.148748 kubelet[2694]: E1013 04:59:51.148647 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.149038 kubelet[2694]: E1013 04:59:51.149018 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.149038 kubelet[2694]: W1013 04:59:51.149032 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.149119 kubelet[2694]: E1013 04:59:51.149045 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.151357 kubelet[2694]: E1013 04:59:51.151329 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.151357 kubelet[2694]: W1013 04:59:51.151350 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.151466 kubelet[2694]: E1013 04:59:51.151364 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.152157 kubelet[2694]: E1013 04:59:51.152134 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.152157 kubelet[2694]: W1013 04:59:51.152152 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.152247 kubelet[2694]: E1013 04:59:51.152166 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.152421 kubelet[2694]: E1013 04:59:51.152399 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.152421 kubelet[2694]: W1013 04:59:51.152414 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.152500 kubelet[2694]: E1013 04:59:51.152433 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.153029 kubelet[2694]: E1013 04:59:51.153001 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.153029 kubelet[2694]: W1013 04:59:51.153018 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.153118 kubelet[2694]: E1013 04:59:51.153034 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.154410 kubelet[2694]: E1013 04:59:51.154340 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.154410 kubelet[2694]: W1013 04:59:51.154358 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.154517 kubelet[2694]: E1013 04:59:51.154416 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.154591 kubelet[2694]: E1013 04:59:51.154562 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.154591 kubelet[2694]: W1013 04:59:51.154575 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.154706 kubelet[2694]: E1013 04:59:51.154633 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.155405 kubelet[2694]: E1013 04:59:51.155364 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.155405 kubelet[2694]: W1013 04:59:51.155385 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.155716 kubelet[2694]: E1013 04:59:51.155574 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.155716 kubelet[2694]: W1013 04:59:51.155588 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.155716 kubelet[2694]: E1013 04:59:51.155601 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.155716 kubelet[2694]: E1013 04:59:51.155617 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.155815 kubelet[2694]: E1013 04:59:51.155803 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.155815 kubelet[2694]: W1013 04:59:51.155813 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.155856 kubelet[2694]: E1013 04:59:51.155823 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.158177 kubelet[2694]: E1013 04:59:51.158035 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.158177 kubelet[2694]: W1013 04:59:51.158054 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.158177 kubelet[2694]: E1013 04:59:51.158078 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.158359 kubelet[2694]: E1013 04:59:51.158347 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.158409 kubelet[2694]: W1013 04:59:51.158399 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.158581 kubelet[2694]: E1013 04:59:51.158554 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.159270 kubelet[2694]: E1013 04:59:51.158677 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.159270 kubelet[2694]: W1013 04:59:51.158686 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.159391 kubelet[2694]: E1013 04:59:51.159376 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.160380 kubelet[2694]: E1013 04:59:51.160347 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.160380 kubelet[2694]: W1013 04:59:51.160364 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.160589 kubelet[2694]: E1013 04:59:51.160574 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.161538 kubelet[2694]: E1013 04:59:51.161471 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.161538 kubelet[2694]: W1013 04:59:51.161489 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.161645 kubelet[2694]: E1013 04:59:51.161632 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.162498 kubelet[2694]: E1013 04:59:51.162467 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.162498 kubelet[2694]: W1013 04:59:51.162483 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.163384 kubelet[2694]: E1013 04:59:51.163296 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.163677 kubelet[2694]: E1013 04:59:51.163649 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.163677 kubelet[2694]: W1013 04:59:51.163664 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.163824 kubelet[2694]: E1013 04:59:51.163804 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.164162 kubelet[2694]: E1013 04:59:51.164149 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.164239 kubelet[2694]: W1013 04:59:51.164220 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.164889 kubelet[2694]: E1013 04:59:51.164875 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 04:59:51.164993 kubelet[2694]: W1013 04:59:51.164949 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 04:59:51.164993 kubelet[2694]: E1013 04:59:51.164966 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 04:59:51.165070 kubelet[2694]: E1013 04:59:51.165059 2694 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 04:59:51.694352 containerd[1566]: time="2025-10-13T04:59:51.694307660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:51.694792 containerd[1566]: time="2025-10-13T04:59:51.694763746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 04:59:51.695894 containerd[1566]: time="2025-10-13T04:59:51.695847781Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:51.698266 containerd[1566]: time="2025-10-13T04:59:51.697622227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:51.698266 containerd[1566]: time="2025-10-13T04:59:51.698137414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 879.132263ms" Oct 13 04:59:51.698266 containerd[1566]: time="2025-10-13T04:59:51.698165904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 04:59:51.701385 containerd[1566]: time="2025-10-13T04:59:51.701349583Z" level=info msg="CreateContainer within sandbox \"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 04:59:51.714298 containerd[1566]: time="2025-10-13T04:59:51.714244677Z" level=info msg="Container 3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:51.720956 containerd[1566]: time="2025-10-13T04:59:51.720871289Z" level=info msg="CreateContainer within sandbox \"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\"" Oct 13 04:59:51.721403 containerd[1566]: time="2025-10-13T04:59:51.721379514Z" level=info msg="StartContainer for \"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\"" Oct 13 04:59:51.723261 containerd[1566]: time="2025-10-13T04:59:51.723224626Z" level=info msg="connecting to shim 3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319" address="unix:///run/containerd/s/008645ec42616e6ef4bfb7a7952f0ecac3ade270443f5b44869c0134d6c695b7" protocol=ttrpc version=3 Oct 13 04:59:51.753445 systemd[1]: Started cri-containerd-3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319.scope - libcontainer container 3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319. Oct 13 04:59:51.823554 containerd[1566]: time="2025-10-13T04:59:51.823490363Z" level=info msg="StartContainer for \"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\" returns successfully" Oct 13 04:59:51.839957 systemd[1]: cri-containerd-3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319.scope: Deactivated successfully. Oct 13 04:59:51.840313 systemd[1]: cri-containerd-3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319.scope: Consumed 38ms CPU time, 6.6M memory peak, 1.4M read from disk. 
Oct 13 04:59:51.861384 containerd[1566]: time="2025-10-13T04:59:51.861329216Z" level=info msg="received exit event container_id:\"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\" id:\"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\" pid:3400 exited_at:{seconds:1760331591 nanos:855701688}" Oct 13 04:59:51.861536 containerd[1566]: time="2025-10-13T04:59:51.861497758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\" id:\"3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319\" pid:3400 exited_at:{seconds:1760331591 nanos:855701688}" Oct 13 04:59:51.897761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3206294ceff93c6f2059bb8c2cfbc1cc2eb55d92fb394e0e21de45ac473dd319-rootfs.mount: Deactivated successfully. Oct 13 04:59:52.052114 kubelet[2694]: E1013 04:59:52.051971 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwwff" podUID="7838801e-a5fa-47ff-b132-bd072f0992f1" Oct 13 04:59:52.127347 kubelet[2694]: E1013 04:59:52.126496 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:52.129572 containerd[1566]: time="2025-10-13T04:59:52.129397954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 04:59:53.127957 kubelet[2694]: E1013 04:59:53.127915 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:54.051604 kubelet[2694]: E1013 04:59:54.051503 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kwwff" podUID="7838801e-a5fa-47ff-b132-bd072f0992f1" Oct 13 04:59:54.812519 containerd[1566]: time="2025-10-13T04:59:54.812469220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:54.813129 containerd[1566]: time="2025-10-13T04:59:54.812942930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 04:59:54.814306 containerd[1566]: time="2025-10-13T04:59:54.814240582Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:54.816333 containerd[1566]: time="2025-10-13T04:59:54.816303356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:54.817301 containerd[1566]: time="2025-10-13T04:59:54.817270343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.687820931s" Oct 13 04:59:54.817346 containerd[1566]: time="2025-10-13T04:59:54.817306115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 04:59:54.819192 containerd[1566]: time="2025-10-13T04:59:54.819084319Z" level=info msg="CreateContainer within sandbox 
\"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 04:59:54.835654 containerd[1566]: time="2025-10-13T04:59:54.835585797Z" level=info msg="Container 7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:54.849741 containerd[1566]: time="2025-10-13T04:59:54.849686232Z" level=info msg="CreateContainer within sandbox \"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\"" Oct 13 04:59:54.850413 containerd[1566]: time="2025-10-13T04:59:54.850381733Z" level=info msg="StartContainer for \"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\"" Oct 13 04:59:54.851760 containerd[1566]: time="2025-10-13T04:59:54.851734962Z" level=info msg="connecting to shim 7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f" address="unix:///run/containerd/s/008645ec42616e6ef4bfb7a7952f0ecac3ade270443f5b44869c0134d6c695b7" protocol=ttrpc version=3 Oct 13 04:59:54.873697 systemd[1]: Started cri-containerd-7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f.scope - libcontainer container 7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f. Oct 13 04:59:54.909445 containerd[1566]: time="2025-10-13T04:59:54.909409308Z" level=info msg="StartContainer for \"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\" returns successfully" Oct 13 04:59:55.512162 systemd[1]: cri-containerd-7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f.scope: Deactivated successfully. Oct 13 04:59:55.512530 systemd[1]: cri-containerd-7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f.scope: Consumed 443ms CPU time, 175.5M memory peak, 2.5M read from disk, 165.8M written to disk. 
Oct 13 04:59:55.514029 containerd[1566]: time="2025-10-13T04:59:55.513870905Z" level=info msg="received exit event container_id:\"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\" id:\"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\" pid:3460 exited_at:{seconds:1760331595 nanos:513426410}" Oct 13 04:59:55.514029 containerd[1566]: time="2025-10-13T04:59:55.513994663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\" id:\"7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f\" pid:3460 exited_at:{seconds:1760331595 nanos:513426410}" Oct 13 04:59:55.532437 kubelet[2694]: I1013 04:59:55.532403 2694 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 04:59:55.536752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ee852d275cc539624b5a12b015eb76b01945e31757c1ce2859bbda58025565f-rootfs.mount: Deactivated successfully. 
Oct 13 04:59:55.584306 kubelet[2694]: I1013 04:59:55.584222 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jl6s\" (UniqueName: \"kubernetes.io/projected/0872fe33-606a-4835-9cbd-9c42a21686ae-kube-api-access-7jl6s\") pod \"coredns-668d6bf9bc-xnbfg\" (UID: \"0872fe33-606a-4835-9cbd-9c42a21686ae\") " pod="kube-system/coredns-668d6bf9bc-xnbfg" Oct 13 04:59:55.584655 kubelet[2694]: I1013 04:59:55.584515 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qztg\" (UniqueName: \"kubernetes.io/projected/2c61a005-e9cb-4c80-b87f-0ab572e03b5f-kube-api-access-2qztg\") pod \"coredns-668d6bf9bc-2q7gq\" (UID: \"2c61a005-e9cb-4c80-b87f-0ab572e03b5f\") " pod="kube-system/coredns-668d6bf9bc-2q7gq" Oct 13 04:59:55.584655 kubelet[2694]: I1013 04:59:55.584687 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0872fe33-606a-4835-9cbd-9c42a21686ae-config-volume\") pod \"coredns-668d6bf9bc-xnbfg\" (UID: \"0872fe33-606a-4835-9cbd-9c42a21686ae\") " pod="kube-system/coredns-668d6bf9bc-xnbfg" Oct 13 04:59:55.584655 kubelet[2694]: I1013 04:59:55.584711 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c61a005-e9cb-4c80-b87f-0ab572e03b5f-config-volume\") pod \"coredns-668d6bf9bc-2q7gq\" (UID: \"2c61a005-e9cb-4c80-b87f-0ab572e03b5f\") " pod="kube-system/coredns-668d6bf9bc-2q7gq" Oct 13 04:59:55.585562 systemd[1]: Created slice kubepods-burstable-pod0872fe33_606a_4835_9cbd_9c42a21686ae.slice - libcontainer container kubepods-burstable-pod0872fe33_606a_4835_9cbd_9c42a21686ae.slice. 
Oct 13 04:59:55.596941 systemd[1]: Created slice kubepods-burstable-pod2c61a005_e9cb_4c80_b87f_0ab572e03b5f.slice - libcontainer container kubepods-burstable-pod2c61a005_e9cb_4c80_b87f_0ab572e03b5f.slice. Oct 13 04:59:55.609814 systemd[1]: Created slice kubepods-besteffort-pod935f00e5_97ae_4f08_a4d5_43a47eb3b9e5.slice - libcontainer container kubepods-besteffort-pod935f00e5_97ae_4f08_a4d5_43a47eb3b9e5.slice. Oct 13 04:59:55.617914 systemd[1]: Created slice kubepods-besteffort-podd0623c40_6d2f_4c7a_804a_d03c33bb837b.slice - libcontainer container kubepods-besteffort-podd0623c40_6d2f_4c7a_804a_d03c33bb837b.slice. Oct 13 04:59:55.624187 systemd[1]: Created slice kubepods-besteffort-podcea21309_0fb4_43ab_bd45_b6e3c15908fb.slice - libcontainer container kubepods-besteffort-podcea21309_0fb4_43ab_bd45_b6e3c15908fb.slice. Oct 13 04:59:55.633659 systemd[1]: Created slice kubepods-besteffort-podef76c406_b08d_4bb9_bfd4_3aa87a0fadf2.slice - libcontainer container kubepods-besteffort-podef76c406_b08d_4bb9_bfd4_3aa87a0fadf2.slice. Oct 13 04:59:55.638348 systemd[1]: Created slice kubepods-besteffort-podee621bdf_5dda_43d7_8bab_48a4b972e452.slice - libcontainer container kubepods-besteffort-podee621bdf_5dda_43d7_8bab_48a4b972e452.slice. 
Oct 13 04:59:55.686278 kubelet[2694]: I1013 04:59:55.685168 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cea21309-0fb4-43ab-bd45-b6e3c15908fb-config\") pod \"goldmane-54d579b49d-qkkhc\" (UID: \"cea21309-0fb4-43ab-bd45-b6e3c15908fb\") " pod="calico-system/goldmane-54d579b49d-qkkhc" Oct 13 04:59:55.686278 kubelet[2694]: I1013 04:59:55.685240 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-backend-key-pair\") pod \"whisker-5c4dd84778-7nbpb\" (UID: \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\") " pod="calico-system/whisker-5c4dd84778-7nbpb" Oct 13 04:59:55.686278 kubelet[2694]: I1013 04:59:55.685296 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0623c40-6d2f-4c7a-804a-d03c33bb837b-tigera-ca-bundle\") pod \"calico-kube-controllers-78cd7576bc-wbmnx\" (UID: \"d0623c40-6d2f-4c7a-804a-d03c33bb837b\") " pod="calico-system/calico-kube-controllers-78cd7576bc-wbmnx" Oct 13 04:59:55.686278 kubelet[2694]: I1013 04:59:55.685315 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cea21309-0fb4-43ab-bd45-b6e3c15908fb-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-qkkhc\" (UID: \"cea21309-0fb4-43ab-bd45-b6e3c15908fb\") " pod="calico-system/goldmane-54d579b49d-qkkhc" Oct 13 04:59:55.686278 kubelet[2694]: I1013 04:59:55.685334 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzttz\" (UniqueName: \"kubernetes.io/projected/935f00e5-97ae-4f08-a4d5-43a47eb3b9e5-kube-api-access-vzttz\") pod \"calico-apiserver-558944944-bmpbn\" (UID: 
\"935f00e5-97ae-4f08-a4d5-43a47eb3b9e5\") " pod="calico-apiserver/calico-apiserver-558944944-bmpbn" Oct 13 04:59:55.686504 kubelet[2694]: I1013 04:59:55.685350 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-ca-bundle\") pod \"whisker-5c4dd84778-7nbpb\" (UID: \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\") " pod="calico-system/whisker-5c4dd84778-7nbpb" Oct 13 04:59:55.686504 kubelet[2694]: I1013 04:59:55.685367 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhb4l\" (UniqueName: \"kubernetes.io/projected/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-kube-api-access-hhb4l\") pod \"whisker-5c4dd84778-7nbpb\" (UID: \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\") " pod="calico-system/whisker-5c4dd84778-7nbpb" Oct 13 04:59:55.686504 kubelet[2694]: I1013 04:59:55.685387 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cea21309-0fb4-43ab-bd45-b6e3c15908fb-goldmane-key-pair\") pod \"goldmane-54d579b49d-qkkhc\" (UID: \"cea21309-0fb4-43ab-bd45-b6e3c15908fb\") " pod="calico-system/goldmane-54d579b49d-qkkhc" Oct 13 04:59:55.686504 kubelet[2694]: I1013 04:59:55.685404 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9nv\" (UniqueName: \"kubernetes.io/projected/cea21309-0fb4-43ab-bd45-b6e3c15908fb-kube-api-access-ps9nv\") pod \"goldmane-54d579b49d-qkkhc\" (UID: \"cea21309-0fb4-43ab-bd45-b6e3c15908fb\") " pod="calico-system/goldmane-54d579b49d-qkkhc" Oct 13 04:59:55.686504 kubelet[2694]: I1013 04:59:55.685424 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/935f00e5-97ae-4f08-a4d5-43a47eb3b9e5-calico-apiserver-certs\") pod \"calico-apiserver-558944944-bmpbn\" (UID: \"935f00e5-97ae-4f08-a4d5-43a47eb3b9e5\") " pod="calico-apiserver/calico-apiserver-558944944-bmpbn" Oct 13 04:59:55.686605 kubelet[2694]: I1013 04:59:55.685439 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7zm\" (UniqueName: \"kubernetes.io/projected/d0623c40-6d2f-4c7a-804a-d03c33bb837b-kube-api-access-cc7zm\") pod \"calico-kube-controllers-78cd7576bc-wbmnx\" (UID: \"d0623c40-6d2f-4c7a-804a-d03c33bb837b\") " pod="calico-system/calico-kube-controllers-78cd7576bc-wbmnx" Oct 13 04:59:55.787241 kubelet[2694]: I1013 04:59:55.786474 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee621bdf-5dda-43d7-8bab-48a4b972e452-calico-apiserver-certs\") pod \"calico-apiserver-558944944-f5mh9\" (UID: \"ee621bdf-5dda-43d7-8bab-48a4b972e452\") " pod="calico-apiserver/calico-apiserver-558944944-f5mh9" Oct 13 04:59:55.787241 kubelet[2694]: I1013 04:59:55.786585 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stm74\" (UniqueName: \"kubernetes.io/projected/ee621bdf-5dda-43d7-8bab-48a4b972e452-kube-api-access-stm74\") pod \"calico-apiserver-558944944-f5mh9\" (UID: \"ee621bdf-5dda-43d7-8bab-48a4b972e452\") " pod="calico-apiserver/calico-apiserver-558944944-f5mh9" Oct 13 04:59:55.890329 kubelet[2694]: E1013 04:59:55.890236 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:55.893968 containerd[1566]: time="2025-10-13T04:59:55.892925409Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-xnbfg,Uid:0872fe33-606a-4835-9cbd-9c42a21686ae,Namespace:kube-system,Attempt:0,}" Oct 13 04:59:55.902156 kubelet[2694]: E1013 04:59:55.902130 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 04:59:55.903962 containerd[1566]: time="2025-10-13T04:59:55.903919989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2q7gq,Uid:2c61a005-e9cb-4c80-b87f-0ab572e03b5f,Namespace:kube-system,Attempt:0,}" Oct 13 04:59:55.914624 containerd[1566]: time="2025-10-13T04:59:55.914589390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-bmpbn,Uid:935f00e5-97ae-4f08-a4d5-43a47eb3b9e5,Namespace:calico-apiserver,Attempt:0,}" Oct 13 04:59:55.922010 containerd[1566]: time="2025-10-13T04:59:55.921977354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cd7576bc-wbmnx,Uid:d0623c40-6d2f-4c7a-804a-d03c33bb837b,Namespace:calico-system,Attempt:0,}" Oct 13 04:59:55.929507 containerd[1566]: time="2025-10-13T04:59:55.929296057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qkkhc,Uid:cea21309-0fb4-43ab-bd45-b6e3c15908fb,Namespace:calico-system,Attempt:0,}" Oct 13 04:59:55.938940 containerd[1566]: time="2025-10-13T04:59:55.938908297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4dd84778-7nbpb,Uid:ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2,Namespace:calico-system,Attempt:0,}" Oct 13 04:59:55.943385 containerd[1566]: time="2025-10-13T04:59:55.943335402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-f5mh9,Uid:ee621bdf-5dda-43d7-8bab-48a4b972e452,Namespace:calico-apiserver,Attempt:0,}" Oct 13 04:59:56.019320 containerd[1566]: time="2025-10-13T04:59:56.019271878Z" level=error msg="Failed to destroy network for sandbox 
\"869cc086eae555a5c47c2d512bf419a324a629a01e5f09b6fdbe5d068c1a11fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.028503 containerd[1566]: time="2025-10-13T04:59:56.028453630Z" level=error msg="Failed to destroy network for sandbox \"31c26fe60c34260c029b9e8bd7afc0c32fe7005e7a3f666c698475bfc7ce5929\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.029611 containerd[1566]: time="2025-10-13T04:59:56.029579237Z" level=error msg="Failed to destroy network for sandbox \"0e01da58667d2965808a9e0d1494859d144577da7e6616fe19658ea87b3616f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.039096 containerd[1566]: time="2025-10-13T04:59:56.038236196Z" level=error msg="Failed to destroy network for sandbox \"6a56c3446e2ecd207c25f2b7e6b16c8338e48def8856ddf57587b88b5b25e740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.039384 containerd[1566]: time="2025-10-13T04:59:56.039243490Z" level=error msg="Failed to destroy network for sandbox \"a92d7856e575b83bcfdbe5dcdcf4428a359ec8692fbe4197f4c96d70ebbee9e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.042534 containerd[1566]: time="2025-10-13T04:59:56.042505759Z" level=error msg="Failed to destroy network for sandbox 
\"0475e7948dc3e1da83906e01fe972dd3d609c064745a0d6801fbf7b68e9c4288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.048931 containerd[1566]: time="2025-10-13T04:59:56.048873252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-bmpbn,Uid:935f00e5-97ae-4f08-a4d5-43a47eb3b9e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc086eae555a5c47c2d512bf419a324a629a01e5f09b6fdbe5d068c1a11fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.055399 containerd[1566]: time="2025-10-13T04:59:56.055363940Z" level=error msg="Failed to destroy network for sandbox \"282c67a159cc7c366ac3ecec34731722a31c6ddb0b43c493fad15f2048a164ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.055732 kubelet[2694]: E1013 04:59:56.055691 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc086eae555a5c47c2d512bf419a324a629a01e5f09b6fdbe5d068c1a11fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.055797 kubelet[2694]: E1013 04:59:56.055767 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc086eae555a5c47c2d512bf419a324a629a01e5f09b6fdbe5d068c1a11fb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-558944944-bmpbn" Oct 13 04:59:56.055823 kubelet[2694]: E1013 04:59:56.055793 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc086eae555a5c47c2d512bf419a324a629a01e5f09b6fdbe5d068c1a11fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-558944944-bmpbn" Oct 13 04:59:56.056513 kubelet[2694]: E1013 04:59:56.056482 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-558944944-bmpbn_calico-apiserver(935f00e5-97ae-4f08-a4d5-43a47eb3b9e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-558944944-bmpbn_calico-apiserver(935f00e5-97ae-4f08-a4d5-43a47eb3b9e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"869cc086eae555a5c47c2d512bf419a324a629a01e5f09b6fdbe5d068c1a11fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-558944944-bmpbn" podUID="935f00e5-97ae-4f08-a4d5-43a47eb3b9e5" Oct 13 04:59:56.058182 containerd[1566]: time="2025-10-13T04:59:56.058142029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnbfg,Uid:0872fe33-606a-4835-9cbd-9c42a21686ae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c26fe60c34260c029b9e8bd7afc0c32fe7005e7a3f666c698475bfc7ce5929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.058619 kubelet[2694]: E1013 04:59:56.058500 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c26fe60c34260c029b9e8bd7afc0c32fe7005e7a3f666c698475bfc7ce5929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.058619 kubelet[2694]: E1013 04:59:56.058547 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c26fe60c34260c029b9e8bd7afc0c32fe7005e7a3f666c698475bfc7ce5929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xnbfg" Oct 13 04:59:56.058619 kubelet[2694]: E1013 04:59:56.058564 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c26fe60c34260c029b9e8bd7afc0c32fe7005e7a3f666c698475bfc7ce5929\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xnbfg" Oct 13 04:59:56.058720 kubelet[2694]: E1013 04:59:56.058591 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xnbfg_kube-system(0872fe33-606a-4835-9cbd-9c42a21686ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xnbfg_kube-system(0872fe33-606a-4835-9cbd-9c42a21686ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"31c26fe60c34260c029b9e8bd7afc0c32fe7005e7a3f666c698475bfc7ce5929\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xnbfg" podUID="0872fe33-606a-4835-9cbd-9c42a21686ae" Oct 13 04:59:56.058735 systemd[1]: Created slice kubepods-besteffort-pod7838801e_a5fa_47ff_b132_bd072f0992f1.slice - libcontainer container kubepods-besteffort-pod7838801e_a5fa_47ff_b132_bd072f0992f1.slice. Oct 13 04:59:56.059614 containerd[1566]: time="2025-10-13T04:59:56.059404476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qkkhc,Uid:cea21309-0fb4-43ab-bd45-b6e3c15908fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01da58667d2965808a9e0d1494859d144577da7e6616fe19658ea87b3616f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.059714 kubelet[2694]: E1013 04:59:56.059557 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01da58667d2965808a9e0d1494859d144577da7e6616fe19658ea87b3616f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.059714 kubelet[2694]: E1013 04:59:56.059588 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01da58667d2965808a9e0d1494859d144577da7e6616fe19658ea87b3616f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qkkhc" Oct 13 04:59:56.059714 kubelet[2694]: E1013 04:59:56.059603 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e01da58667d2965808a9e0d1494859d144577da7e6616fe19658ea87b3616f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qkkhc" Oct 13 04:59:56.059797 kubelet[2694]: E1013 04:59:56.059634 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-qkkhc_calico-system(cea21309-0fb4-43ab-bd45-b6e3c15908fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-qkkhc_calico-system(cea21309-0fb4-43ab-bd45-b6e3c15908fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e01da58667d2965808a9e0d1494859d144577da7e6616fe19658ea87b3616f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qkkhc" podUID="cea21309-0fb4-43ab-bd45-b6e3c15908fb" Oct 13 04:59:56.061448 containerd[1566]: time="2025-10-13T04:59:56.061415581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwwff,Uid:7838801e-a5fa-47ff-b132-bd072f0992f1,Namespace:calico-system,Attempt:0,}" Oct 13 04:59:56.061967 containerd[1566]: time="2025-10-13T04:59:56.061885078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2q7gq,Uid:2c61a005-e9cb-4c80-b87f-0ab572e03b5f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a56c3446e2ecd207c25f2b7e6b16c8338e48def8856ddf57587b88b5b25e740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.062087 kubelet[2694]: E1013 04:59:56.062026 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a56c3446e2ecd207c25f2b7e6b16c8338e48def8856ddf57587b88b5b25e740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.062087 kubelet[2694]: E1013 04:59:56.062067 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a56c3446e2ecd207c25f2b7e6b16c8338e48def8856ddf57587b88b5b25e740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2q7gq" Oct 13 04:59:56.062087 kubelet[2694]: E1013 04:59:56.062082 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a56c3446e2ecd207c25f2b7e6b16c8338e48def8856ddf57587b88b5b25e740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2q7gq" Oct 13 04:59:56.062436 kubelet[2694]: E1013 04:59:56.062116 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2q7gq_kube-system(2c61a005-e9cb-4c80-b87f-0ab572e03b5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-2q7gq_kube-system(2c61a005-e9cb-4c80-b87f-0ab572e03b5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a56c3446e2ecd207c25f2b7e6b16c8338e48def8856ddf57587b88b5b25e740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2q7gq" podUID="2c61a005-e9cb-4c80-b87f-0ab572e03b5f" Oct 13 04:59:56.063663 containerd[1566]: time="2025-10-13T04:59:56.063467778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cd7576bc-wbmnx,Uid:d0623c40-6d2f-4c7a-804a-d03c33bb837b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92d7856e575b83bcfdbe5dcdcf4428a359ec8692fbe4197f4c96d70ebbee9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.064036 kubelet[2694]: E1013 04:59:56.063998 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92d7856e575b83bcfdbe5dcdcf4428a359ec8692fbe4197f4c96d70ebbee9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.064082 kubelet[2694]: E1013 04:59:56.064048 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92d7856e575b83bcfdbe5dcdcf4428a359ec8692fbe4197f4c96d70ebbee9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-78cd7576bc-wbmnx" Oct 13 04:59:56.064110 kubelet[2694]: E1013 04:59:56.064080 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92d7856e575b83bcfdbe5dcdcf4428a359ec8692fbe4197f4c96d70ebbee9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78cd7576bc-wbmnx" Oct 13 04:59:56.064131 kubelet[2694]: E1013 04:59:56.064115 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78cd7576bc-wbmnx_calico-system(d0623c40-6d2f-4c7a-804a-d03c33bb837b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78cd7576bc-wbmnx_calico-system(d0623c40-6d2f-4c7a-804a-d03c33bb837b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a92d7856e575b83bcfdbe5dcdcf4428a359ec8692fbe4197f4c96d70ebbee9e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78cd7576bc-wbmnx" podUID="d0623c40-6d2f-4c7a-804a-d03c33bb837b" Oct 13 04:59:56.067997 containerd[1566]: time="2025-10-13T04:59:56.067953564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-f5mh9,Uid:ee621bdf-5dda-43d7-8bab-48a4b972e452,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475e7948dc3e1da83906e01fe972dd3d609c064745a0d6801fbf7b68e9c4288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 13 04:59:56.068332 kubelet[2694]: E1013 04:59:56.068291 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475e7948dc3e1da83906e01fe972dd3d609c064745a0d6801fbf7b68e9c4288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.068385 kubelet[2694]: E1013 04:59:56.068345 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475e7948dc3e1da83906e01fe972dd3d609c064745a0d6801fbf7b68e9c4288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-558944944-f5mh9" Oct 13 04:59:56.068385 kubelet[2694]: E1013 04:59:56.068361 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475e7948dc3e1da83906e01fe972dd3d609c064745a0d6801fbf7b68e9c4288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-558944944-f5mh9" Oct 13 04:59:56.068433 kubelet[2694]: E1013 04:59:56.068402 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-558944944-f5mh9_calico-apiserver(ee621bdf-5dda-43d7-8bab-48a4b972e452)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-558944944-f5mh9_calico-apiserver(ee621bdf-5dda-43d7-8bab-48a4b972e452)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0475e7948dc3e1da83906e01fe972dd3d609c064745a0d6801fbf7b68e9c4288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-558944944-f5mh9" podUID="ee621bdf-5dda-43d7-8bab-48a4b972e452" Oct 13 04:59:56.071101 containerd[1566]: time="2025-10-13T04:59:56.071049064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c4dd84778-7nbpb,Uid:ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"282c67a159cc7c366ac3ecec34731722a31c6ddb0b43c493fad15f2048a164ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.071349 kubelet[2694]: E1013 04:59:56.071213 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"282c67a159cc7c366ac3ecec34731722a31c6ddb0b43c493fad15f2048a164ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.071349 kubelet[2694]: E1013 04:59:56.071322 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"282c67a159cc7c366ac3ecec34731722a31c6ddb0b43c493fad15f2048a164ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c4dd84778-7nbpb" Oct 13 04:59:56.071349 kubelet[2694]: E1013 04:59:56.071349 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"282c67a159cc7c366ac3ecec34731722a31c6ddb0b43c493fad15f2048a164ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c4dd84778-7nbpb" Oct 13 04:59:56.071459 kubelet[2694]: E1013 04:59:56.071384 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c4dd84778-7nbpb_calico-system(ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c4dd84778-7nbpb_calico-system(ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"282c67a159cc7c366ac3ecec34731722a31c6ddb0b43c493fad15f2048a164ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c4dd84778-7nbpb" podUID="ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2" Oct 13 04:59:56.118346 containerd[1566]: time="2025-10-13T04:59:56.118295292Z" level=error msg="Failed to destroy network for sandbox \"4e57f06af0609a8e6acc85c03f7b677eb5bc04299a178c91edfb8e7fc80992d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.120327 containerd[1566]: time="2025-10-13T04:59:56.120283391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwwff,Uid:7838801e-a5fa-47ff-b132-bd072f0992f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e57f06af0609a8e6acc85c03f7b677eb5bc04299a178c91edfb8e7fc80992d4\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.120537 kubelet[2694]: E1013 04:59:56.120497 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e57f06af0609a8e6acc85c03f7b677eb5bc04299a178c91edfb8e7fc80992d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 04:59:56.120582 kubelet[2694]: E1013 04:59:56.120562 2694 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e57f06af0609a8e6acc85c03f7b677eb5bc04299a178c91edfb8e7fc80992d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwwff" Oct 13 04:59:56.120628 kubelet[2694]: E1013 04:59:56.120581 2694 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e57f06af0609a8e6acc85c03f7b677eb5bc04299a178c91edfb8e7fc80992d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kwwff" Oct 13 04:59:56.120899 kubelet[2694]: E1013 04:59:56.120641 2694 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kwwff_calico-system(7838801e-a5fa-47ff-b132-bd072f0992f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kwwff_calico-system(7838801e-a5fa-47ff-b132-bd072f0992f1)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"4e57f06af0609a8e6acc85c03f7b677eb5bc04299a178c91edfb8e7fc80992d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kwwff" podUID="7838801e-a5fa-47ff-b132-bd072f0992f1" Oct 13 04:59:56.140196 containerd[1566]: time="2025-10-13T04:59:56.140077030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 04:59:56.837125 systemd[1]: run-netns-cni\x2d09d4989f\x2dd32a\x2df1db\x2dfae8\x2dc705d5cdf7d8.mount: Deactivated successfully. Oct 13 04:59:56.837414 systemd[1]: run-netns-cni\x2d5f323c0a\x2d99be\x2df9cb\x2d5d6e\x2df06f4a3cc67b.mount: Deactivated successfully. Oct 13 04:59:58.985706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518279528.mount: Deactivated successfully. Oct 13 04:59:59.204936 containerd[1566]: time="2025-10-13T04:59:59.204877741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:59.206445 containerd[1566]: time="2025-10-13T04:59:59.206402013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 04:59:59.207398 containerd[1566]: time="2025-10-13T04:59:59.207360420Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:59.209190 containerd[1566]: time="2025-10-13T04:59:59.209162163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 04:59:59.210012 containerd[1566]: time="2025-10-13T04:59:59.209594154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with 
image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.069383766s" Oct 13 04:59:59.210012 containerd[1566]: time="2025-10-13T04:59:59.209623162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 04:59:59.219099 containerd[1566]: time="2025-10-13T04:59:59.219065991Z" level=info msg="CreateContainer within sandbox \"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 04:59:59.235888 containerd[1566]: time="2025-10-13T04:59:59.235781450Z" level=info msg="Container 3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09: CDI devices from CRI Config.CDIDevices: []" Oct 13 04:59:59.243605 containerd[1566]: time="2025-10-13T04:59:59.243560971Z" level=info msg="CreateContainer within sandbox \"8c3a89bcc7d2071e9ca889601b6237fbb358ad15f25d1087248ae4cf3f211b13\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\"" Oct 13 04:59:59.244055 containerd[1566]: time="2025-10-13T04:59:59.244024050Z" level=info msg="StartContainer for \"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\"" Oct 13 04:59:59.245737 containerd[1566]: time="2025-10-13T04:59:59.245643827Z" level=info msg="connecting to shim 3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09" address="unix:///run/containerd/s/008645ec42616e6ef4bfb7a7952f0ecac3ade270443f5b44869c0134d6c695b7" protocol=ttrpc version=3 Oct 13 04:59:59.264412 systemd[1]: Started cri-containerd-3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09.scope - libcontainer container 
3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09. Oct 13 04:59:59.301116 containerd[1566]: time="2025-10-13T04:59:59.301050399Z" level=info msg="StartContainer for \"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\" returns successfully" Oct 13 04:59:59.422979 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 04:59:59.423089 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 04:59:59.714580 kubelet[2694]: I1013 04:59:59.714512 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-ca-bundle\") pod \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\" (UID: \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\") " Oct 13 04:59:59.714580 kubelet[2694]: I1013 04:59:59.714580 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhb4l\" (UniqueName: \"kubernetes.io/projected/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-kube-api-access-hhb4l\") pod \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\" (UID: \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\") " Oct 13 04:59:59.715036 kubelet[2694]: I1013 04:59:59.714609 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-backend-key-pair\") pod \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\" (UID: \"ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2\") " Oct 13 04:59:59.723797 kubelet[2694]: I1013 04:59:59.723754 2694 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2" (UID: "ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 04:59:59.727016 kubelet[2694]: I1013 04:59:59.726954 2694 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2" (UID: "ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 04:59:59.735156 kubelet[2694]: I1013 04:59:59.735116 2694 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-kube-api-access-hhb4l" (OuterVolumeSpecName: "kube-api-access-hhb4l") pod "ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2" (UID: "ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2"). InnerVolumeSpecName "kube-api-access-hhb4l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 04:59:59.815467 kubelet[2694]: I1013 04:59:59.815422 2694 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 04:59:59.815467 kubelet[2694]: I1013 04:59:59.815459 2694 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhb4l\" (UniqueName: \"kubernetes.io/projected/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-kube-api-access-hhb4l\") on node \"localhost\" DevicePath \"\"" Oct 13 04:59:59.815467 kubelet[2694]: I1013 04:59:59.815468 2694 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 04:59:59.986473 systemd[1]: 
var-lib-kubelet-pods-ef76c406\x2db08d\x2d4bb9\x2dbfd4\x2d3aa87a0fadf2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhb4l.mount: Deactivated successfully. Oct 13 04:59:59.986568 systemd[1]: var-lib-kubelet-pods-ef76c406\x2db08d\x2d4bb9\x2dbfd4\x2d3aa87a0fadf2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:00:00.157222 systemd[1]: Removed slice kubepods-besteffort-podef76c406_b08d_4bb9_bfd4_3aa87a0fadf2.slice - libcontainer container kubepods-besteffort-podef76c406_b08d_4bb9_bfd4_3aa87a0fadf2.slice. Oct 13 05:00:00.175278 kubelet[2694]: I1013 05:00:00.173716 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kk2lj" podStartSLOduration=1.538382934 podStartE2EDuration="12.17080472s" podCreationTimestamp="2025-10-13 04:59:48 +0000 UTC" firstStartedPulling="2025-10-13 04:59:48.577868627 +0000 UTC m=+17.621098810" lastFinishedPulling="2025-10-13 04:59:59.210290413 +0000 UTC m=+28.253520596" observedRunningTime="2025-10-13 05:00:00.16987409 +0000 UTC m=+29.213104313" watchObservedRunningTime="2025-10-13 05:00:00.17080472 +0000 UTC m=+29.214034903" Oct 13 05:00:00.235397 systemd[1]: Created slice kubepods-besteffort-pod803b3adf_3dc3_4d0e_9eb5_88d4ed7200be.slice - libcontainer container kubepods-besteffort-pod803b3adf_3dc3_4d0e_9eb5_88d4ed7200be.slice. 
Oct 13 05:00:00.419691 kubelet[2694]: I1013 05:00:00.419616 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdzf\" (UniqueName: \"kubernetes.io/projected/803b3adf-3dc3-4d0e-9eb5-88d4ed7200be-kube-api-access-fzdzf\") pod \"whisker-56bd7d774f-82n6g\" (UID: \"803b3adf-3dc3-4d0e-9eb5-88d4ed7200be\") " pod="calico-system/whisker-56bd7d774f-82n6g" Oct 13 05:00:00.419691 kubelet[2694]: I1013 05:00:00.419662 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/803b3adf-3dc3-4d0e-9eb5-88d4ed7200be-whisker-ca-bundle\") pod \"whisker-56bd7d774f-82n6g\" (UID: \"803b3adf-3dc3-4d0e-9eb5-88d4ed7200be\") " pod="calico-system/whisker-56bd7d774f-82n6g" Oct 13 05:00:00.419691 kubelet[2694]: I1013 05:00:00.419685 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/803b3adf-3dc3-4d0e-9eb5-88d4ed7200be-whisker-backend-key-pair\") pod \"whisker-56bd7d774f-82n6g\" (UID: \"803b3adf-3dc3-4d0e-9eb5-88d4ed7200be\") " pod="calico-system/whisker-56bd7d774f-82n6g" Oct 13 05:00:00.839144 containerd[1566]: time="2025-10-13T05:00:00.838402252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bd7d774f-82n6g,Uid:803b3adf-3dc3-4d0e-9eb5-88d4ed7200be,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:01.033851 systemd-networkd[1466]: cali67f5853df02: Link UP Oct 13 05:00:01.034197 systemd-networkd[1466]: cali67f5853df02: Gained carrier Oct 13 05:00:01.048701 containerd[1566]: 2025-10-13 05:00:00.869 [INFO][3939] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:00:01.048701 containerd[1566]: 2025-10-13 05:00:00.914 [INFO][3939] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--56bd7d774f--82n6g-eth0 whisker-56bd7d774f- calico-system 803b3adf-3dc3-4d0e-9eb5-88d4ed7200be 875 0 2025-10-13 05:00:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56bd7d774f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-56bd7d774f-82n6g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali67f5853df02 [] [] }} ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-" Oct 13 05:00:01.048701 containerd[1566]: 2025-10-13 05:00:00.914 [INFO][3939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.048701 containerd[1566]: 2025-10-13 05:00:00.986 [INFO][3984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" HandleID="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Workload="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:00.987 [INFO][3984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" HandleID="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Workload="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-56bd7d774f-82n6g", "timestamp":"2025-10-13 05:00:00.986501605 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:00.987 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:00.987 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:00.987 [INFO][3984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:00.997 [INFO][3984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" host="localhost" Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:01.002 [INFO][3984] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:01.006 [INFO][3984] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:01.007 [INFO][3984] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:01.010 [INFO][3984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:01.048975 containerd[1566]: 2025-10-13 05:00:01.010 [INFO][3984] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" host="localhost" Oct 13 05:00:01.049174 containerd[1566]: 2025-10-13 05:00:01.015 [INFO][3984] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8 Oct 13 05:00:01.049174 containerd[1566]: 2025-10-13 05:00:01.019 [INFO][3984] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" host="localhost" Oct 13 05:00:01.049174 containerd[1566]: 2025-10-13 05:00:01.023 [INFO][3984] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" host="localhost" Oct 13 05:00:01.049174 containerd[1566]: 2025-10-13 05:00:01.023 [INFO][3984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" host="localhost" Oct 13 05:00:01.049174 containerd[1566]: 2025-10-13 05:00:01.023 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:00:01.049174 containerd[1566]: 2025-10-13 05:00:01.023 [INFO][3984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" HandleID="k8s-pod-network.d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Workload="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.049307 containerd[1566]: 2025-10-13 05:00:01.026 [INFO][3939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56bd7d774f--82n6g-eth0", GenerateName:"whisker-56bd7d774f-", Namespace:"calico-system", SelfLink:"", UID:"803b3adf-3dc3-4d0e-9eb5-88d4ed7200be", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56bd7d774f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-56bd7d774f-82n6g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali67f5853df02", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:01.049307 containerd[1566]: 2025-10-13 05:00:01.026 [INFO][3939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.049403 containerd[1566]: 2025-10-13 05:00:01.026 [INFO][3939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67f5853df02 ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.049403 containerd[1566]: 2025-10-13 05:00:01.034 [INFO][3939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.049449 containerd[1566]: 2025-10-13 05:00:01.034 [INFO][3939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56bd7d774f--82n6g-eth0", GenerateName:"whisker-56bd7d774f-", Namespace:"calico-system", SelfLink:"", UID:"803b3adf-3dc3-4d0e-9eb5-88d4ed7200be", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 0, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56bd7d774f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8", Pod:"whisker-56bd7d774f-82n6g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali67f5853df02", MAC:"8a:d6:f6:6f:8b:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:01.049497 containerd[1566]: 2025-10-13 05:00:01.044 [INFO][3939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" Namespace="calico-system" Pod="whisker-56bd7d774f-82n6g" WorkloadEndpoint="localhost-k8s-whisker--56bd7d774f--82n6g-eth0" Oct 13 05:00:01.065400 kubelet[2694]: I1013 05:00:01.063604 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2" path="/var/lib/kubelet/pods/ef76c406-b08d-4bb9-bfd4-3aa87a0fadf2/volumes" Oct 13 05:00:01.171126 systemd-networkd[1466]: vxlan.calico: Link UP Oct 13 05:00:01.171135 systemd-networkd[1466]: vxlan.calico: Gained carrier Oct 13 05:00:01.190367 kubelet[2694]: I1013 05:00:01.190333 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:00:01.191675 containerd[1566]: time="2025-10-13T05:00:01.191641584Z" level=info msg="connecting to 
shim d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8" address="unix:///run/containerd/s/3d115aea4440b25dd10b3a8db75f51c562dab73c317e1074977bd7cc3e3147c9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:01.235542 systemd[1]: Started cri-containerd-d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8.scope - libcontainer container d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8. Oct 13 05:00:01.247925 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:01.268179 containerd[1566]: time="2025-10-13T05:00:01.268132996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bd7d774f-82n6g,Uid:803b3adf-3dc3-4d0e-9eb5-88d4ed7200be,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8\"" Oct 13 05:00:01.271138 containerd[1566]: time="2025-10-13T05:00:01.271109704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:00:02.888410 systemd-networkd[1466]: cali67f5853df02: Gained IPv6LL Oct 13 05:00:03.208507 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Oct 13 05:00:07.060263 kubelet[2694]: E1013 05:00:07.060068 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:07.060771 containerd[1566]: time="2025-10-13T05:00:07.060551842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwwff,Uid:7838801e-a5fa-47ff-b132-bd072f0992f1,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:07.060771 containerd[1566]: time="2025-10-13T05:00:07.060558004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnbfg,Uid:0872fe33-606a-4835-9cbd-9c42a21686ae,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:07.183197 systemd-networkd[1466]: cali3ea37494a74: Link UP Oct 
13 05:00:07.183449 systemd-networkd[1466]: cali3ea37494a74: Gained carrier Oct 13 05:00:07.200052 containerd[1566]: 2025-10-13 05:00:07.114 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kwwff-eth0 csi-node-driver- calico-system 7838801e-a5fa-47ff-b132-bd072f0992f1 701 0 2025-10-13 04:59:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kwwff eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3ea37494a74 [] [] }} ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-" Oct 13 05:00:07.200052 containerd[1566]: 2025-10-13 05:00:07.115 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.200052 containerd[1566]: 2025-10-13 05:00:07.139 [INFO][4160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" HandleID="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Workload="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.139 [INFO][4160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" 
HandleID="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Workload="localhost-k8s-csi--node--driver--kwwff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b0e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kwwff", "timestamp":"2025-10-13 05:00:07.139552889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.139 [INFO][4160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.139 [INFO][4160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.139 [INFO][4160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.151 [INFO][4160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" host="localhost" Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.156 [INFO][4160] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.160 [INFO][4160] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.162 [INFO][4160] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.164 [INFO][4160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:07.200249 containerd[1566]: 2025-10-13 05:00:07.164 
[INFO][4160] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" host="localhost" Oct 13 05:00:07.200518 containerd[1566]: 2025-10-13 05:00:07.166 [INFO][4160] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25 Oct 13 05:00:07.200518 containerd[1566]: 2025-10-13 05:00:07.169 [INFO][4160] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" host="localhost" Oct 13 05:00:07.200518 containerd[1566]: 2025-10-13 05:00:07.174 [INFO][4160] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" host="localhost" Oct 13 05:00:07.200518 containerd[1566]: 2025-10-13 05:00:07.174 [INFO][4160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" host="localhost" Oct 13 05:00:07.200518 containerd[1566]: 2025-10-13 05:00:07.174 [INFO][4160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:00:07.200518 containerd[1566]: 2025-10-13 05:00:07.174 [INFO][4160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" HandleID="k8s-pod-network.be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Workload="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.200629 containerd[1566]: 2025-10-13 05:00:07.177 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kwwff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7838801e-a5fa-47ff-b132-bd072f0992f1", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kwwff", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ea37494a74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:07.200679 containerd[1566]: 2025-10-13 05:00:07.177 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.200679 containerd[1566]: 2025-10-13 05:00:07.177 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ea37494a74 ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.200679 containerd[1566]: 2025-10-13 05:00:07.183 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.200748 containerd[1566]: 2025-10-13 05:00:07.183 [INFO][4130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kwwff-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7838801e-a5fa-47ff-b132-bd072f0992f1", ResourceVersion:"701", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25", Pod:"csi-node-driver-kwwff", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ea37494a74", MAC:"86:c0:32:b2:ba:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:07.200795 containerd[1566]: 2025-10-13 05:00:07.194 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" Namespace="calico-system" Pod="csi-node-driver-kwwff" WorkloadEndpoint="localhost-k8s-csi--node--driver--kwwff-eth0" Oct 13 05:00:07.222319 containerd[1566]: time="2025-10-13T05:00:07.222279897Z" level=info msg="connecting to shim be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25" address="unix:///run/containerd/s/d9203a02dedfb0e9881f124aebf459f88ad71c0182a268be2fa6f4b9165e168a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:07.248386 systemd[1]: Started 
cri-containerd-be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25.scope - libcontainer container be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25. Oct 13 05:00:07.265025 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:07.284344 containerd[1566]: time="2025-10-13T05:00:07.284285976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kwwff,Uid:7838801e-a5fa-47ff-b132-bd072f0992f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25\"" Oct 13 05:00:07.298456 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:37996.service - OpenSSH per-connection server daemon (10.0.0.1:37996). Oct 13 05:00:07.310073 systemd-networkd[1466]: cali53061ecbdca: Link UP Oct 13 05:00:07.310399 systemd-networkd[1466]: cali53061ecbdca: Gained carrier Oct 13 05:00:07.323968 containerd[1566]: 2025-10-13 05:00:07.116 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0 coredns-668d6bf9bc- kube-system 0872fe33-606a-4835-9cbd-9c42a21686ae 805 0 2025-10-13 04:59:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xnbfg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali53061ecbdca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-" Oct 13 05:00:07.323968 containerd[1566]: 2025-10-13 05:00:07.116 [INFO][4136] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.323968 containerd[1566]: 2025-10-13 05:00:07.143 [INFO][4161] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" HandleID="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Workload="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.143 [INFO][4161] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" HandleID="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Workload="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xnbfg", "timestamp":"2025-10-13 05:00:07.143064049 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.143 [INFO][4161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.174 [INFO][4161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.174 [INFO][4161] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.253 [INFO][4161] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" host="localhost" Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.268 [INFO][4161] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.278 [INFO][4161] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.281 [INFO][4161] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.284 [INFO][4161] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:07.324837 containerd[1566]: 2025-10-13 05:00:07.284 [INFO][4161] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" host="localhost" Oct 13 05:00:07.325045 containerd[1566]: 2025-10-13 05:00:07.286 [INFO][4161] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa Oct 13 05:00:07.325045 containerd[1566]: 2025-10-13 05:00:07.295 [INFO][4161] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" host="localhost" Oct 13 05:00:07.325045 containerd[1566]: 2025-10-13 05:00:07.303 [INFO][4161] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" host="localhost" Oct 13 05:00:07.325045 containerd[1566]: 2025-10-13 05:00:07.303 [INFO][4161] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" host="localhost" Oct 13 05:00:07.325045 containerd[1566]: 2025-10-13 05:00:07.303 [INFO][4161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:07.325045 containerd[1566]: 2025-10-13 05:00:07.303 [INFO][4161] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" HandleID="k8s-pod-network.f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Workload="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.325156 containerd[1566]: 2025-10-13 05:00:07.306 [INFO][4136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0872fe33-606a-4835-9cbd-9c42a21686ae", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xnbfg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53061ecbdca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:07.325787 containerd[1566]: 2025-10-13 05:00:07.306 [INFO][4136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.325787 containerd[1566]: 2025-10-13 05:00:07.306 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53061ecbdca ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.325787 containerd[1566]: 2025-10-13 05:00:07.310 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.326014 containerd[1566]: 2025-10-13 05:00:07.311 [INFO][4136] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0872fe33-606a-4835-9cbd-9c42a21686ae", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa", Pod:"coredns-668d6bf9bc-xnbfg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali53061ecbdca", MAC:"ca:83:50:7f:23:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:07.326014 containerd[1566]: 2025-10-13 05:00:07.321 [INFO][4136] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" Namespace="kube-system" Pod="coredns-668d6bf9bc-xnbfg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xnbfg-eth0" Oct 13 05:00:07.350699 containerd[1566]: time="2025-10-13T05:00:07.350659019Z" level=info msg="connecting to shim f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa" address="unix:///run/containerd/s/9a359438fffa7b5b880a47599d8bdf88fabb10a9daf9f7ec380e1eecd791653b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:07.367238 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 37996 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:07.368479 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:07.375442 systemd[1]: Started cri-containerd-f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa.scope - libcontainer container f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa. Oct 13 05:00:07.380490 systemd-logind[1533]: New session 8 of user core. Oct 13 05:00:07.380527 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 05:00:07.389587 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:07.407522 containerd[1566]: time="2025-10-13T05:00:07.407470532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xnbfg,Uid:0872fe33-606a-4835-9cbd-9c42a21686ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa\"" Oct 13 05:00:07.408270 kubelet[2694]: E1013 05:00:07.408069 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:07.418927 containerd[1566]: time="2025-10-13T05:00:07.418898063Z" level=info msg="CreateContainer within sandbox \"f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:00:07.426299 containerd[1566]: time="2025-10-13T05:00:07.426250126Z" level=info msg="Container edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:07.432961 containerd[1566]: time="2025-10-13T05:00:07.432911335Z" level=info msg="CreateContainer within sandbox \"f79322c4734f52c8c333f538d4cfbb55e556c4927eb6a7aa7c20056ec3704efa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35\"" Oct 13 05:00:07.436214 containerd[1566]: time="2025-10-13T05:00:07.434374098Z" level=info msg="StartContainer for \"edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35\"" Oct 13 05:00:07.436568 containerd[1566]: time="2025-10-13T05:00:07.436540237Z" level=info msg="connecting to shim edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35" address="unix:///run/containerd/s/9a359438fffa7b5b880a47599d8bdf88fabb10a9daf9f7ec380e1eecd791653b" protocol=ttrpc version=3 
Oct 13 05:00:07.456492 systemd[1]: Started cri-containerd-edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35.scope - libcontainer container edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35. Oct 13 05:00:07.547934 containerd[1566]: time="2025-10-13T05:00:07.547892224Z" level=info msg="StartContainer for \"edfa9dc57949f2348e31055df9511a64f5d6c01521d2ffe2bb86416f4edd1a35\" returns successfully" Oct 13 05:00:07.563998 sshd[4286]: Connection closed by 10.0.0.1 port 37996 Oct 13 05:00:07.564315 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:07.568445 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:37996.service: Deactivated successfully. Oct 13 05:00:07.571828 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:00:07.574218 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:00:07.576096 systemd-logind[1533]: Removed session 8. Oct 13 05:00:07.741984 containerd[1566]: time="2025-10-13T05:00:07.741926890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:07.742503 containerd[1566]: time="2025-10-13T05:00:07.742461874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Oct 13 05:00:07.743410 containerd[1566]: time="2025-10-13T05:00:07.743374450Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:07.745357 containerd[1566]: time="2025-10-13T05:00:07.745330429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:07.746036 containerd[1566]: time="2025-10-13T05:00:07.745995397Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 6.474850725s" Oct 13 05:00:07.746036 containerd[1566]: time="2025-10-13T05:00:07.746030204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Oct 13 05:00:07.747618 containerd[1566]: time="2025-10-13T05:00:07.747556819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:00:07.750637 containerd[1566]: time="2025-10-13T05:00:07.750354801Z" level=info msg="CreateContainer within sandbox \"d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:00:07.756042 containerd[1566]: time="2025-10-13T05:00:07.755983410Z" level=info msg="Container 02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:07.761678 containerd[1566]: time="2025-10-13T05:00:07.761620861Z" level=info msg="CreateContainer within sandbox \"d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a\"" Oct 13 05:00:07.763238 containerd[1566]: time="2025-10-13T05:00:07.762113276Z" level=info msg="StartContainer for \"02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a\"" Oct 13 05:00:07.763238 containerd[1566]: time="2025-10-13T05:00:07.763152237Z" level=info msg="connecting to shim 02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a" 
address="unix:///run/containerd/s/3d115aea4440b25dd10b3a8db75f51c562dab73c317e1074977bd7cc3e3147c9" protocol=ttrpc version=3 Oct 13 05:00:07.783465 systemd[1]: Started cri-containerd-02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a.scope - libcontainer container 02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a. Oct 13 05:00:07.819410 containerd[1566]: time="2025-10-13T05:00:07.819320346Z" level=info msg="StartContainer for \"02e18853b468f60cc8541456520f40ec898bdcb4ddc65c7f0e97ee9fe2b1dc3a\" returns successfully" Oct 13 05:00:08.052328 containerd[1566]: time="2025-10-13T05:00:08.052250769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-bmpbn,Uid:935f00e5-97ae-4f08-a4d5-43a47eb3b9e5,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:00:08.151799 systemd-networkd[1466]: cali48373ba3932: Link UP Oct 13 05:00:08.152930 systemd-networkd[1466]: cali48373ba3932: Gained carrier Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.087 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--558944944--bmpbn-eth0 calico-apiserver-558944944- calico-apiserver 935f00e5-97ae-4f08-a4d5-43a47eb3b9e5 813 0 2025-10-13 04:59:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:558944944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-558944944-bmpbn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48373ba3932 [] [] }} ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 
05:00:08.088 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.113 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" HandleID="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Workload="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.113 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" HandleID="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Workload="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-558944944-bmpbn", "timestamp":"2025-10-13 05:00:08.11302061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.113 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.113 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.113 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.122 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.125 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.129 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.131 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.134 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.134 [INFO][4394] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.135 [INFO][4394] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1 Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.139 [INFO][4394] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.145 [INFO][4394] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.145 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" host="localhost" Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.145 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:08.165900 containerd[1566]: 2025-10-13 05:00:08.145 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" HandleID="k8s-pod-network.06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Workload="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.167115 containerd[1566]: 2025-10-13 05:00:08.147 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--558944944--bmpbn-eth0", GenerateName:"calico-apiserver-558944944-", Namespace:"calico-apiserver", SelfLink:"", UID:"935f00e5-97ae-4f08-a4d5-43a47eb3b9e5", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558944944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-558944944-bmpbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48373ba3932", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:08.167115 containerd[1566]: 2025-10-13 05:00:08.148 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.167115 containerd[1566]: 2025-10-13 05:00:08.148 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48373ba3932 ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.167115 containerd[1566]: 2025-10-13 05:00:08.153 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.167115 containerd[1566]: 2025-10-13 05:00:08.153 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--558944944--bmpbn-eth0", GenerateName:"calico-apiserver-558944944-", Namespace:"calico-apiserver", SelfLink:"", UID:"935f00e5-97ae-4f08-a4d5-43a47eb3b9e5", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558944944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1", Pod:"calico-apiserver-558944944-bmpbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48373ba3932", MAC:"a2:4c:00:85:39:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:08.167115 containerd[1566]: 2025-10-13 05:00:08.163 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-bmpbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--bmpbn-eth0" Oct 13 05:00:08.185063 containerd[1566]: time="2025-10-13T05:00:08.185017757Z" level=info msg="connecting to shim 06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1" address="unix:///run/containerd/s/3aab711744bcbe7814b5136410de769b3e5bff24e0be95c26c42fcd9e990edcc" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:08.205451 systemd[1]: Started cri-containerd-06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1.scope - libcontainer container 06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1. Oct 13 05:00:08.216501 kubelet[2694]: E1013 05:00:08.216057 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:08.222975 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:08.233200 kubelet[2694]: I1013 05:00:08.232902 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xnbfg" podStartSLOduration=32.232871175 podStartE2EDuration="32.232871175s" podCreationTimestamp="2025-10-13 04:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:00:08.231845502 +0000 UTC m=+37.275075645" watchObservedRunningTime="2025-10-13 05:00:08.232871175 +0000 UTC m=+37.276101318" Oct 13 05:00:08.264717 containerd[1566]: time="2025-10-13T05:00:08.264148242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-bmpbn,Uid:935f00e5-97ae-4f08-a4d5-43a47eb3b9e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1\"" Oct 13 05:00:08.329409 systemd-networkd[1466]: cali3ea37494a74: Gained IPv6LL Oct 13 05:00:08.584400 systemd-networkd[1466]: cali53061ecbdca: Gained IPv6LL Oct 13 05:00:09.052690 kubelet[2694]: E1013 05:00:09.052623 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:09.053405 containerd[1566]: time="2025-10-13T05:00:09.053315658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cd7576bc-wbmnx,Uid:d0623c40-6d2f-4c7a-804a-d03c33bb837b,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:09.053461 containerd[1566]: time="2025-10-13T05:00:09.053380150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2q7gq,Uid:2c61a005-e9cb-4c80-b87f-0ab572e03b5f,Namespace:kube-system,Attempt:0,}" Oct 13 05:00:09.173098 systemd-networkd[1466]: calib409575b190: Link UP Oct 13 05:00:09.173610 systemd-networkd[1466]: calib409575b190: Gained carrier Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.101 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0 calico-kube-controllers-78cd7576bc- calico-system d0623c40-6d2f-4c7a-804a-d03c33bb837b 815 0 2025-10-13 04:59:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78cd7576bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78cd7576bc-wbmnx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib409575b190 [] [] }} ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" 
Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.101 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.127 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" HandleID="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Workload="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.127 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" HandleID="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Workload="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78cd7576bc-wbmnx", "timestamp":"2025-10-13 05:00:09.127503167 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.127 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.127 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.127 [INFO][4493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.139 [INFO][4493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.146 [INFO][4493] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.152 [INFO][4493] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.153 [INFO][4493] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.156 [INFO][4493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.156 [INFO][4493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.157 [INFO][4493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297 Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.161 [INFO][4493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.168 [INFO][4493] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.168 [INFO][4493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" host="localhost" Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.168 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:09.189405 containerd[1566]: 2025-10-13 05:00:09.168 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" HandleID="k8s-pod-network.6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Workload="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.190654 containerd[1566]: 2025-10-13 05:00:09.171 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0", GenerateName:"calico-kube-controllers-78cd7576bc-", Namespace:"calico-system", SelfLink:"", UID:"d0623c40-6d2f-4c7a-804a-d03c33bb837b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"78cd7576bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78cd7576bc-wbmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib409575b190", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:09.190654 containerd[1566]: 2025-10-13 05:00:09.171 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.190654 containerd[1566]: 2025-10-13 05:00:09.171 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib409575b190 ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.190654 containerd[1566]: 2025-10-13 05:00:09.173 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.190654 containerd[1566]: 2025-10-13 05:00:09.173 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0", GenerateName:"calico-kube-controllers-78cd7576bc-", Namespace:"calico-system", SelfLink:"", UID:"d0623c40-6d2f-4c7a-804a-d03c33bb837b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78cd7576bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297", Pod:"calico-kube-controllers-78cd7576bc-wbmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib409575b190", MAC:"6a:7d:a2:4e:95:77", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:09.190654 containerd[1566]: 2025-10-13 05:00:09.187 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" Namespace="calico-system" Pod="calico-kube-controllers-78cd7576bc-wbmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78cd7576bc--wbmnx-eth0" Oct 13 05:00:09.222719 containerd[1566]: time="2025-10-13T05:00:09.222683457Z" level=info msg="connecting to shim 6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297" address="unix:///run/containerd/s/893f8a0a70133b43dc9f5f43bbd135491d596e2c2b4addc68f010296ac7385fb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:09.224883 kubelet[2694]: E1013 05:00:09.224856 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:09.264411 systemd[1]: Started cri-containerd-6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297.scope - libcontainer container 6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297. 
Oct 13 05:00:09.278641 systemd-networkd[1466]: califa05028da53: Link UP Oct 13 05:00:09.279952 systemd-networkd[1466]: califa05028da53: Gained carrier Oct 13 05:00:09.282045 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.107 [INFO][4471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0 coredns-668d6bf9bc- kube-system 2c61a005-e9cb-4c80-b87f-0ab572e03b5f 810 0 2025-10-13 04:59:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2q7gq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa05028da53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.108 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.136 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" HandleID="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Workload="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.136 [INFO][4499] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" HandleID="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Workload="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137e40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2q7gq", "timestamp":"2025-10-13 05:00:09.136600223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.136 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.168 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.168 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.241 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.246 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.254 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.256 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.258 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.258 [INFO][4499] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.259 [INFO][4499] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1 Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.263 [INFO][4499] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.270 [INFO][4499] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.272 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" host="localhost" Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.272 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:00:09.295644 containerd[1566]: 2025-10-13 05:00:09.272 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" HandleID="k8s-pod-network.f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Workload="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.296669 containerd[1566]: 2025-10-13 05:00:09.276 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c61a005-e9cb-4c80-b87f-0ab572e03b5f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2q7gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa05028da53", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:09.296669 containerd[1566]: 2025-10-13 05:00:09.276 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.296669 containerd[1566]: 2025-10-13 05:00:09.276 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa05028da53 ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.296669 containerd[1566]: 2025-10-13 05:00:09.280 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.296669 containerd[1566]: 2025-10-13 05:00:09.281 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2c61a005-e9cb-4c80-b87f-0ab572e03b5f", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1", Pod:"coredns-668d6bf9bc-2q7gq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa05028da53", MAC:"f6:b4:2e:d9:ae:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:09.296669 containerd[1566]: 2025-10-13 05:00:09.293 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" Namespace="kube-system" Pod="coredns-668d6bf9bc-2q7gq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2q7gq-eth0" Oct 13 05:00:09.322818 containerd[1566]: time="2025-10-13T05:00:09.322724993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78cd7576bc-wbmnx,Uid:d0623c40-6d2f-4c7a-804a-d03c33bb837b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297\"" Oct 13 05:00:09.347687 containerd[1566]: time="2025-10-13T05:00:09.347631128Z" level=info msg="connecting to shim f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1" address="unix:///run/containerd/s/888055329f9086a44b748625017243db7667466034e64f957524bb92718d6ade" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:09.369433 systemd[1]: Started cri-containerd-f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1.scope - libcontainer container f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1. 
Oct 13 05:00:09.383718 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:09.403740 containerd[1566]: time="2025-10-13T05:00:09.403624484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2q7gq,Uid:2c61a005-e9cb-4c80-b87f-0ab572e03b5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1\"" Oct 13 05:00:09.404506 kubelet[2694]: E1013 05:00:09.404482 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:09.407005 containerd[1566]: time="2025-10-13T05:00:09.406970453Z" level=info msg="CreateContainer within sandbox \"f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:00:09.416878 containerd[1566]: time="2025-10-13T05:00:09.416742872Z" level=info msg="Container 9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:09.421080 containerd[1566]: time="2025-10-13T05:00:09.421032693Z" level=info msg="CreateContainer within sandbox \"f84e0c660243d9ff8192c7e18654bc5de969d43f81b47683fadab927f6adfab1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241\"" Oct 13 05:00:09.421568 containerd[1566]: time="2025-10-13T05:00:09.421540066Z" level=info msg="StartContainer for \"9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241\"" Oct 13 05:00:09.422569 containerd[1566]: time="2025-10-13T05:00:09.422536927Z" level=info msg="connecting to shim 9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241" address="unix:///run/containerd/s/888055329f9086a44b748625017243db7667466034e64f957524bb92718d6ade" protocol=ttrpc version=3 
Oct 13 05:00:09.445439 systemd[1]: Started cri-containerd-9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241.scope - libcontainer container 9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241. Oct 13 05:00:09.480320 containerd[1566]: time="2025-10-13T05:00:09.480236714Z" level=info msg="StartContainer for \"9eb5de33ba584f489b3253a27a4b0c313e153cc6c5fd196ab50b498a97526241\" returns successfully" Oct 13 05:00:10.184620 systemd-networkd[1466]: cali48373ba3932: Gained IPv6LL Oct 13 05:00:10.228963 kubelet[2694]: E1013 05:00:10.228904 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:10.230326 kubelet[2694]: E1013 05:00:10.230287 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:10.239759 kubelet[2694]: I1013 05:00:10.239703 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2q7gq" podStartSLOduration=34.239688958 podStartE2EDuration="34.239688958s" podCreationTimestamp="2025-10-13 04:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:00:10.239675556 +0000 UTC m=+39.282905739" watchObservedRunningTime="2025-10-13 05:00:10.239688958 +0000 UTC m=+39.282919141" Oct 13 05:00:10.760444 systemd-networkd[1466]: calib409575b190: Gained IPv6LL Oct 13 05:00:10.952511 systemd-networkd[1466]: califa05028da53: Gained IPv6LL Oct 13 05:00:11.052656 containerd[1566]: time="2025-10-13T05:00:11.052515976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qkkhc,Uid:cea21309-0fb4-43ab-bd45-b6e3c15908fb,Namespace:calico-system,Attempt:0,}" Oct 13 05:00:11.053375 containerd[1566]: 
time="2025-10-13T05:00:11.053323715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-f5mh9,Uid:ee621bdf-5dda-43d7-8bab-48a4b972e452,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:00:11.162481 systemd-networkd[1466]: califc701784601: Link UP Oct 13 05:00:11.163093 systemd-networkd[1466]: califc701784601: Gained carrier Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.091 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--qkkhc-eth0 goldmane-54d579b49d- calico-system cea21309-0fb4-43ab-bd45-b6e3c15908fb 812 0 2025-10-13 04:59:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-qkkhc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califc701784601 [] [] }} ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.091 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.119 [INFO][4694] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" HandleID="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Workload="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.179137 
containerd[1566]: 2025-10-13 05:00:11.120 [INFO][4694] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" HandleID="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Workload="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-qkkhc", "timestamp":"2025-10-13 05:00:11.119863763 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.120 [INFO][4694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.120 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.120 [INFO][4694] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.129 [INFO][4694] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.135 [INFO][4694] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.139 [INFO][4694] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.141 [INFO][4694] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.143 [INFO][4694] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.143 [INFO][4694] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.144 [INFO][4694] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476 Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.148 [INFO][4694] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.154 [INFO][4694] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.154 [INFO][4694] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" host="localhost" Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.155 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:11.179137 containerd[1566]: 2025-10-13 05:00:11.155 [INFO][4694] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" HandleID="k8s-pod-network.d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Workload="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.179758 containerd[1566]: 2025-10-13 05:00:11.157 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qkkhc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cea21309-0fb4-43ab-bd45-b6e3c15908fb", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-qkkhc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califc701784601", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:11.179758 containerd[1566]: 2025-10-13 05:00:11.159 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.179758 containerd[1566]: 2025-10-13 05:00:11.159 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc701784601 ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.179758 containerd[1566]: 2025-10-13 05:00:11.164 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.179758 containerd[1566]: 2025-10-13 05:00:11.164 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qkkhc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cea21309-0fb4-43ab-bd45-b6e3c15908fb", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476", Pod:"goldmane-54d579b49d-qkkhc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califc701784601", MAC:"56:d0:30:5d:b3:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:11.179758 containerd[1566]: 2025-10-13 05:00:11.176 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" Namespace="calico-system" Pod="goldmane-54d579b49d-qkkhc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qkkhc-eth0" Oct 13 05:00:11.201863 containerd[1566]: time="2025-10-13T05:00:11.201787538Z" level=info msg="connecting to shim 
d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476" address="unix:///run/containerd/s/065b8c53667b5dcaef6574da4ff833cc7be560cc87d9b0d2f5cd036c6bdf7f53" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:11.239895 kubelet[2694]: E1013 05:00:11.239828 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:11.257408 systemd[1]: Started cri-containerd-d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476.scope - libcontainer container d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476. Oct 13 05:00:11.273843 systemd-networkd[1466]: calif1a95a78ea0: Link UP Oct 13 05:00:11.274678 systemd-networkd[1466]: calif1a95a78ea0: Gained carrier Oct 13 05:00:11.278314 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.103 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--558944944--f5mh9-eth0 calico-apiserver-558944944- calico-apiserver ee621bdf-5dda-43d7-8bab-48a4b972e452 814 0 2025-10-13 04:59:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:558944944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-558944944-f5mh9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1a95a78ea0 [] [] }} ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-" Oct 13 05:00:11.292561 containerd[1566]: 
2025-10-13 05:00:11.103 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.130 [INFO][4701] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" HandleID="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Workload="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.130 [INFO][4701] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" HandleID="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Workload="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-558944944-f5mh9", "timestamp":"2025-10-13 05:00:11.130126489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.130 [INFO][4701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.154 [INFO][4701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.155 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.230 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.238 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.246 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.248 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.252 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.252 [INFO][4701] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.254 [INFO][4701] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0 Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.258 [INFO][4701] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.266 [INFO][4701] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.266 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" host="localhost" Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.266 [INFO][4701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:00:11.292561 containerd[1566]: 2025-10-13 05:00:11.266 [INFO][4701] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" HandleID="k8s-pod-network.4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Workload="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.294732 containerd[1566]: 2025-10-13 05:00:11.269 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--558944944--f5mh9-eth0", GenerateName:"calico-apiserver-558944944-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee621bdf-5dda-43d7-8bab-48a4b972e452", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558944944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-558944944-f5mh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a95a78ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:11.294732 containerd[1566]: 2025-10-13 05:00:11.270 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.294732 containerd[1566]: 2025-10-13 05:00:11.270 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1a95a78ea0 ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.294732 containerd[1566]: 2025-10-13 05:00:11.274 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.294732 containerd[1566]: 2025-10-13 05:00:11.278 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--558944944--f5mh9-eth0", GenerateName:"calico-apiserver-558944944-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee621bdf-5dda-43d7-8bab-48a4b972e452", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 4, 59, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558944944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0", Pod:"calico-apiserver-558944944-f5mh9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a95a78ea0", MAC:"ee:c6:04:11:49:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:00:11.294732 containerd[1566]: 2025-10-13 05:00:11.290 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" Namespace="calico-apiserver" Pod="calico-apiserver-558944944-f5mh9" WorkloadEndpoint="localhost-k8s-calico--apiserver--558944944--f5mh9-eth0" Oct 13 05:00:11.311428 containerd[1566]: time="2025-10-13T05:00:11.311327944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qkkhc,Uid:cea21309-0fb4-43ab-bd45-b6e3c15908fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476\"" Oct 13 05:00:11.321550 containerd[1566]: time="2025-10-13T05:00:11.321514297Z" level=info msg="connecting to shim 4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0" address="unix:///run/containerd/s/64b0e2f482f07e24d239324312cbd7efad9df774202445c3cb420898aa0f6d46" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:00:11.347656 systemd[1]: Started cri-containerd-4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0.scope - libcontainer container 4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0. 
Oct 13 05:00:11.359802 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:00:11.388803 containerd[1566]: time="2025-10-13T05:00:11.388764107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558944944-f5mh9,Uid:ee621bdf-5dda-43d7-8bab-48a4b972e452,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0\"" Oct 13 05:00:12.232429 systemd-networkd[1466]: califc701784601: Gained IPv6LL Oct 13 05:00:12.242790 kubelet[2694]: E1013 05:00:12.242736 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:12.553841 systemd-networkd[1466]: calif1a95a78ea0: Gained IPv6LL Oct 13 05:00:12.580708 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:38002.service - OpenSSH per-connection server daemon (10.0.0.1:38002). Oct 13 05:00:12.652026 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 38002 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:12.653889 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:12.658322 systemd-logind[1533]: New session 9 of user core. Oct 13 05:00:12.670442 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:00:12.831195 sshd[4837]: Connection closed by 10.0.0.1 port 38002 Oct 13 05:00:12.831631 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:12.835379 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:38002.service: Deactivated successfully. Oct 13 05:00:12.837291 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:00:12.839526 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:00:12.840880 systemd-logind[1533]: Removed session 9. 
Oct 13 05:00:14.836693 containerd[1566]: time="2025-10-13T05:00:14.836414866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:14.837087 containerd[1566]: time="2025-10-13T05:00:14.837013041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 05:00:14.837810 containerd[1566]: time="2025-10-13T05:00:14.837775923Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:14.840061 containerd[1566]: time="2025-10-13T05:00:14.840025521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:14.840562 containerd[1566]: time="2025-10-13T05:00:14.840522720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 7.092930534s" Oct 13 05:00:14.840562 containerd[1566]: time="2025-10-13T05:00:14.840560166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 05:00:14.841528 containerd[1566]: time="2025-10-13T05:00:14.841498316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:00:14.843449 containerd[1566]: time="2025-10-13T05:00:14.843420182Z" level=info msg="CreateContainer within sandbox \"be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:00:14.851635 containerd[1566]: time="2025-10-13T05:00:14.851504109Z" level=info msg="Container f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:14.867277 containerd[1566]: time="2025-10-13T05:00:14.867228933Z" level=info msg="CreateContainer within sandbox \"be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a\"" Oct 13 05:00:14.868144 containerd[1566]: time="2025-10-13T05:00:14.868120194Z" level=info msg="StartContainer for \"f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a\"" Oct 13 05:00:14.869533 containerd[1566]: time="2025-10-13T05:00:14.869506695Z" level=info msg="connecting to shim f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a" address="unix:///run/containerd/s/d9203a02dedfb0e9881f124aebf459f88ad71c0182a268be2fa6f4b9165e168a" protocol=ttrpc version=3 Oct 13 05:00:14.900448 systemd[1]: Started cri-containerd-f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a.scope - libcontainer container f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a. Oct 13 05:00:14.936888 containerd[1566]: time="2025-10-13T05:00:14.936848538Z" level=info msg="StartContainer for \"f5b7a8cf474956a60374b206d78b704c182941cdfefba03fae40102b3323711a\" returns successfully" Oct 13 05:00:17.738521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986492336.mount: Deactivated successfully. Oct 13 05:00:17.813643 kubelet[2694]: I1013 05:00:17.813596 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:00:17.850822 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:39216.service - OpenSSH per-connection server daemon (10.0.0.1:39216). 
Oct 13 05:00:17.914527 sshd[4890]: Accepted publickey for core from 10.0.0.1 port 39216 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:17.916074 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:17.919860 systemd-logind[1533]: New session 10 of user core. Oct 13 05:00:17.927437 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:00:17.946892 containerd[1566]: time="2025-10-13T05:00:17.946841352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\" id:\"01fb8ceb61770d7ffe12697eac3674a7ff40e0ff49e4514f397541c76d965b06\" pid:4904 exited_at:{seconds:1760331617 nanos:946555709}" Oct 13 05:00:18.032448 containerd[1566]: time="2025-10-13T05:00:18.032405215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\" id:\"2a61cc41c7ba8c60a05741b989be2405eda4f52a5c08f513a987f96fb74411a1\" pid:4931 exited_at:{seconds:1760331618 nanos:32122334}" Oct 13 05:00:18.087924 sshd[4918]: Connection closed by 10.0.0.1 port 39216 Oct 13 05:00:18.088461 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:18.093067 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:00:18.093393 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:39216.service: Deactivated successfully. Oct 13 05:00:18.097669 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:00:18.099864 systemd-logind[1533]: Removed session 10. 
Oct 13 05:00:18.421155 containerd[1566]: time="2025-10-13T05:00:18.421051269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:18.421784 containerd[1566]: time="2025-10-13T05:00:18.421741249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Oct 13 05:00:18.422740 containerd[1566]: time="2025-10-13T05:00:18.422714951Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:18.425047 containerd[1566]: time="2025-10-13T05:00:18.425002044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:18.425649 containerd[1566]: time="2025-10-13T05:00:18.425619734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.584086412s" Oct 13 05:00:18.425723 containerd[1566]: time="2025-10-13T05:00:18.425651938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Oct 13 05:00:18.427077 containerd[1566]: time="2025-10-13T05:00:18.426927244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:00:18.428075 containerd[1566]: time="2025-10-13T05:00:18.428039686Z" level=info msg="CreateContainer within sandbox 
\"d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:00:18.441279 containerd[1566]: time="2025-10-13T05:00:18.440872474Z" level=info msg="Container 2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:18.447541 containerd[1566]: time="2025-10-13T05:00:18.447502199Z" level=info msg="CreateContainer within sandbox \"d9089930c72e570a5aedc4f90cd642adeca4acbeda645fc81352b132b18e17e8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414\"" Oct 13 05:00:18.448877 containerd[1566]: time="2025-10-13T05:00:18.448596038Z" level=info msg="StartContainer for \"2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414\"" Oct 13 05:00:18.451183 containerd[1566]: time="2025-10-13T05:00:18.451035073Z" level=info msg="connecting to shim 2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414" address="unix:///run/containerd/s/3d115aea4440b25dd10b3a8db75f51c562dab73c317e1074977bd7cc3e3147c9" protocol=ttrpc version=3 Oct 13 05:00:18.474418 systemd[1]: Started cri-containerd-2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414.scope - libcontainer container 2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414. 
Oct 13 05:00:18.511796 containerd[1566]: time="2025-10-13T05:00:18.511755352Z" level=info msg="StartContainer for \"2bddf013971ad03167a2624d2be77f3f82c71ff99024bb18f272df0bf4fcd414\" returns successfully" Oct 13 05:00:22.723393 containerd[1566]: time="2025-10-13T05:00:22.723336604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:22.724190 containerd[1566]: time="2025-10-13T05:00:22.724140633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 05:00:22.725057 containerd[1566]: time="2025-10-13T05:00:22.725021231Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:22.727296 containerd[1566]: time="2025-10-13T05:00:22.727249092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:22.727987 containerd[1566]: time="2025-10-13T05:00:22.727945626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.300949812s" Oct 13 05:00:22.728021 containerd[1566]: time="2025-10-13T05:00:22.727984231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 05:00:22.729012 containerd[1566]: time="2025-10-13T05:00:22.728793821Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:00:22.730887 containerd[1566]: time="2025-10-13T05:00:22.730858940Z" level=info msg="CreateContainer within sandbox \"06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:00:22.736368 containerd[1566]: time="2025-10-13T05:00:22.736329118Z" level=info msg="Container c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:22.744741 containerd[1566]: time="2025-10-13T05:00:22.744700088Z" level=info msg="CreateContainer within sandbox \"06c6e1a917aee13906c7966d2744c1e043dd928410480db68fd95797d40e77e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc\"" Oct 13 05:00:22.745439 containerd[1566]: time="2025-10-13T05:00:22.745397303Z" level=info msg="StartContainer for \"c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc\"" Oct 13 05:00:22.746570 containerd[1566]: time="2025-10-13T05:00:22.746537016Z" level=info msg="connecting to shim c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc" address="unix:///run/containerd/s/3aab711744bcbe7814b5136410de769b3e5bff24e0be95c26c42fcd9e990edcc" protocol=ttrpc version=3 Oct 13 05:00:22.765539 systemd[1]: Started cri-containerd-c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc.scope - libcontainer container c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc. Oct 13 05:00:22.816405 containerd[1566]: time="2025-10-13T05:00:22.816298075Z" level=info msg="StartContainer for \"c13b275a541fa8e5c135c166420eab982440a70354c718bca2339d5139251fdc\" returns successfully" Oct 13 05:00:23.099556 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:39224.service - OpenSSH per-connection server daemon (10.0.0.1:39224). 
Oct 13 05:00:23.171595 sshd[5052]: Accepted publickey for core from 10.0.0.1 port 39224 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:23.173316 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:23.177316 systemd-logind[1533]: New session 11 of user core. Oct 13 05:00:23.181410 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:00:23.286032 kubelet[2694]: I1013 05:00:23.285966 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-558944944-bmpbn" podStartSLOduration=23.824089546 podStartE2EDuration="38.285949406s" podCreationTimestamp="2025-10-13 04:59:45 +0000 UTC" firstStartedPulling="2025-10-13 05:00:08.266846469 +0000 UTC m=+37.310076652" lastFinishedPulling="2025-10-13 05:00:22.728706329 +0000 UTC m=+51.771936512" observedRunningTime="2025-10-13 05:00:23.28560572 +0000 UTC m=+52.328835903" watchObservedRunningTime="2025-10-13 05:00:23.285949406 +0000 UTC m=+52.329179589" Oct 13 05:00:23.287234 kubelet[2694]: I1013 05:00:23.286060 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56bd7d774f-82n6g" podStartSLOduration=6.129051572 podStartE2EDuration="23.28605546s" podCreationTimestamp="2025-10-13 05:00:00 +0000 UTC" firstStartedPulling="2025-10-13 05:00:01.269643955 +0000 UTC m=+30.312874138" lastFinishedPulling="2025-10-13 05:00:18.426647883 +0000 UTC m=+47.469878026" observedRunningTime="2025-10-13 05:00:19.277001591 +0000 UTC m=+48.320231774" watchObservedRunningTime="2025-10-13 05:00:23.28605546 +0000 UTC m=+52.329285643" Oct 13 05:00:23.373528 sshd[5055]: Connection closed by 10.0.0.1 port 39224 Oct 13 05:00:23.373400 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:23.377185 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:39224.service: Deactivated successfully. 
Oct 13 05:00:23.378949 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:00:23.379648 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:00:23.380720 systemd-logind[1533]: Removed session 11. Oct 13 05:00:24.277315 kubelet[2694]: I1013 05:00:24.277129 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:00:27.584865 containerd[1566]: time="2025-10-13T05:00:27.584817187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:27.586043 containerd[1566]: time="2025-10-13T05:00:27.586017097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 05:00:27.589905 containerd[1566]: time="2025-10-13T05:00:27.589869219Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:27.592506 containerd[1566]: time="2025-10-13T05:00:27.592395535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:27.592942 containerd[1566]: time="2025-10-13T05:00:27.592869794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.864047249s"
Oct 13 05:00:27.592942 containerd[1566]: time="2025-10-13T05:00:27.592938243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 05:00:27.594797 containerd[1566]: time="2025-10-13T05:00:27.594751310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:00:27.605348 containerd[1566]: time="2025-10-13T05:00:27.605313031Z" level=info msg="CreateContainer within sandbox \"6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:00:27.612295 containerd[1566]: time="2025-10-13T05:00:27.611373710Z" level=info msg="Container 8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:27.618165 containerd[1566]: time="2025-10-13T05:00:27.618110353Z" level=info msg="CreateContainer within sandbox \"6bbdae4b980024366cbd97b8624ec55edf7b1e2927e289e23ddd182e53f2d297\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0\"" Oct 13 05:00:27.619374 containerd[1566]: time="2025-10-13T05:00:27.618601934Z" level=info msg="StartContainer for \"8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0\"" Oct 13 05:00:27.619870 containerd[1566]: time="2025-10-13T05:00:27.619699232Z" level=info msg="connecting to shim 8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0" address="unix:///run/containerd/s/893f8a0a70133b43dc9f5f43bbd135491d596e2c2b4addc68f010296ac7385fb" protocol=ttrpc version=3 Oct 13 05:00:27.638503 systemd[1]: Started cri-containerd-8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0.scope - libcontainer container 8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0.
Oct 13 05:00:27.670139 containerd[1566]: time="2025-10-13T05:00:27.670102419Z" level=info msg="StartContainer for \"8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0\" returns successfully" Oct 13 05:00:28.302712 kubelet[2694]: I1013 05:00:28.301427 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78cd7576bc-wbmnx" podStartSLOduration=22.032702675 podStartE2EDuration="40.301410052s" podCreationTimestamp="2025-10-13 04:59:48 +0000 UTC" firstStartedPulling="2025-10-13 05:00:09.32490515 +0000 UTC m=+38.368135333" lastFinishedPulling="2025-10-13 05:00:27.593612527 +0000 UTC m=+56.636842710" observedRunningTime="2025-10-13 05:00:28.299944111 +0000 UTC m=+57.343174294" watchObservedRunningTime="2025-10-13 05:00:28.301410052 +0000 UTC m=+57.344640235" Oct 13 05:00:28.336079 containerd[1566]: time="2025-10-13T05:00:28.336034368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0\" id:\"133e18598e7e6f5c5b5bf40f2f4df160b1b30e777b3bfe856437cf7adb13ec11\" pid:5130 exited_at:{seconds:1760331628 nanos:330898334}" Oct 13 05:00:28.387992 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:38588.service - OpenSSH per-connection server daemon (10.0.0.1:38588). Oct 13 05:00:28.456599 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 38588 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:28.457536 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:28.461337 systemd-logind[1533]: New session 12 of user core. Oct 13 05:00:28.485473 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 13 05:00:28.644543 sshd[5144]: Connection closed by 10.0.0.1 port 38588 Oct 13 05:00:28.644829 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:28.649490 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:38588.service: Deactivated successfully. Oct 13 05:00:28.652945 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:00:28.653712 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:00:28.654825 systemd-logind[1533]: Removed session 12. Oct 13 05:00:33.173543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33343636.mount: Deactivated successfully. Oct 13 05:00:33.663001 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:38598.service - OpenSSH per-connection server daemon (10.0.0.1:38598). Oct 13 05:00:33.739546 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 38598 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:33.741997 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:33.747038 systemd-logind[1533]: New session 13 of user core. Oct 13 05:00:33.759404 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:00:33.935665 sshd[5169]: Connection closed by 10.0.0.1 port 38598 Oct 13 05:00:33.937469 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:33.953562 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:38614.service - OpenSSH per-connection server daemon (10.0.0.1:38614). Oct 13 05:00:33.954059 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:38598.service: Deactivated successfully. Oct 13 05:00:33.962818 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:00:33.965296 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:00:33.967030 systemd-logind[1533]: Removed session 13. 
Oct 13 05:00:34.021806 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 38614 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:34.023116 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:34.027747 systemd-logind[1533]: New session 14 of user core. Oct 13 05:00:34.040424 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:00:34.331739 sshd[5187]: Connection closed by 10.0.0.1 port 38614 Oct 13 05:00:34.330202 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:34.341480 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:38614.service: Deactivated successfully. Oct 13 05:00:34.344769 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:00:34.346904 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:00:34.350826 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:38616.service - OpenSSH per-connection server daemon (10.0.0.1:38616). Oct 13 05:00:34.353364 systemd-logind[1533]: Removed session 14. Oct 13 05:00:34.408705 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 38616 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:34.410539 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:34.416059 systemd-logind[1533]: New session 15 of user core. Oct 13 05:00:34.427411 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 13 05:00:34.606406 sshd[5206]: Connection closed by 10.0.0.1 port 38616 Oct 13 05:00:34.608094 containerd[1566]: time="2025-10-13T05:00:34.608054573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:34.608139 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:34.609224 containerd[1566]: time="2025-10-13T05:00:34.609188704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 05:00:34.610283 containerd[1566]: time="2025-10-13T05:00:34.609952593Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:34.612320 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:38616.service: Deactivated successfully. Oct 13 05:00:34.613217 containerd[1566]: time="2025-10-13T05:00:34.613186087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:34.614087 systemd[1]: session-15.scope: Deactivated successfully. 
Oct 13 05:00:34.614189 containerd[1566]: time="2025-10-13T05:00:34.614166480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 7.019380006s" Oct 13 05:00:34.614318 containerd[1566]: time="2025-10-13T05:00:34.614194884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 05:00:34.615387 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:00:34.615891 containerd[1566]: time="2025-10-13T05:00:34.615867877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:00:34.617841 containerd[1566]: time="2025-10-13T05:00:34.617814062Z" level=info msg="CreateContainer within sandbox \"d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:00:34.618306 systemd-logind[1533]: Removed session 15. 
Oct 13 05:00:34.629860 containerd[1566]: time="2025-10-13T05:00:34.629817611Z" level=info msg="Container 6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:34.637115 containerd[1566]: time="2025-10-13T05:00:34.637058409Z" level=info msg="CreateContainer within sandbox \"d3fd7087ade30b4d61b1c00977b0ac0b94d75e6599742fe7c84588d810824476\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d\"" Oct 13 05:00:34.637509 containerd[1566]: time="2025-10-13T05:00:34.637476097Z" level=info msg="StartContainer for \"6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d\"" Oct 13 05:00:34.639864 containerd[1566]: time="2025-10-13T05:00:34.639816128Z" level=info msg="connecting to shim 6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d" address="unix:///run/containerd/s/065b8c53667b5dcaef6574da4ff833cc7be560cc87d9b0d2f5cd036c6bdf7f53" protocol=ttrpc version=3 Oct 13 05:00:34.661444 systemd[1]: Started cri-containerd-6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d.scope - libcontainer container 6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d. 
Oct 13 05:00:34.699314 containerd[1566]: time="2025-10-13T05:00:34.699278528Z" level=info msg="StartContainer for \"6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d\" returns successfully" Oct 13 05:00:35.322751 kubelet[2694]: I1013 05:00:35.322603 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-qkkhc" podStartSLOduration=25.02176787 podStartE2EDuration="48.32258568s" podCreationTimestamp="2025-10-13 04:59:47 +0000 UTC" firstStartedPulling="2025-10-13 05:00:11.314370388 +0000 UTC m=+40.357600531" lastFinishedPulling="2025-10-13 05:00:34.615188158 +0000 UTC m=+63.658418341" observedRunningTime="2025-10-13 05:00:35.321407104 +0000 UTC m=+64.364637287" watchObservedRunningTime="2025-10-13 05:00:35.32258568 +0000 UTC m=+64.365815863" Oct 13 05:00:35.404916 containerd[1566]: time="2025-10-13T05:00:35.404868355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d\" id:\"2605bf78d0ed30038b4f37ec6f9c04b19244fefb66d2902cb3d4ac69e5b0fb93\" pid:5266 exit_status:1 exited_at:{seconds:1760331635 nanos:404497193}" Oct 13 05:00:36.370683 containerd[1566]: time="2025-10-13T05:00:36.370604704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d\" id:\"951c63a99e21762c9bbbb7c9dbc57ad5f70ec4b732074533507e67b52ac6724a\" pid:5291 exit_status:1 exited_at:{seconds:1760331636 nanos:370300629}" Oct 13 05:00:36.687953 containerd[1566]: time="2025-10-13T05:00:36.687836536Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:36.688584 containerd[1566]: time="2025-10-13T05:00:36.688531815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Oct 13 05:00:36.690266 containerd[1566]: time="2025-10-13T05:00:36.690220447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.074322046s" Oct 13 05:00:36.690321 containerd[1566]: time="2025-10-13T05:00:36.690274013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 05:00:36.691047 containerd[1566]: time="2025-10-13T05:00:36.691028499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:00:36.692138 containerd[1566]: time="2025-10-13T05:00:36.692101861Z" level=info msg="CreateContainer within sandbox \"4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:00:36.699201 containerd[1566]: time="2025-10-13T05:00:36.699169704Z" level=info msg="Container a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:36.706525 containerd[1566]: time="2025-10-13T05:00:36.706469814Z" level=info msg="CreateContainer within sandbox \"4731a9266e53b05ef6a001b2c5ec20254d70218204947b73cde9fd40d65c66a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654\"" Oct 13 05:00:36.706913 containerd[1566]: time="2025-10-13T05:00:36.706892702Z" level=info msg="StartContainer for \"a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654\"" Oct 13 05:00:36.708050 containerd[1566]: time="2025-10-13T05:00:36.708024671Z" level=info msg="connecting to shim a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654" address="unix:///run/containerd/s/64b0e2f482f07e24d239324312cbd7efad9df774202445c3cb420898aa0f6d46" protocol=ttrpc version=3
Oct 13 05:00:36.731441 systemd[1]: Started cri-containerd-a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654.scope - libcontainer container a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654. Oct 13 05:00:36.771983 containerd[1566]: time="2025-10-13T05:00:36.771945339Z" level=info msg="StartContainer for \"a35f426aa9e6b9fa4e1088d20dbbbe558dd39ed291884cdca28905806cd78654\" returns successfully" Oct 13 05:00:38.367416 kubelet[2694]: I1013 05:00:38.367356 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-558944944-f5mh9" podStartSLOduration=28.066223669 podStartE2EDuration="53.367339228s" podCreationTimestamp="2025-10-13 04:59:45 +0000 UTC" firstStartedPulling="2025-10-13 05:00:11.389822849 +0000 UTC m=+40.433053032" lastFinishedPulling="2025-10-13 05:00:36.690938408 +0000 UTC m=+65.734168591" observedRunningTime="2025-10-13 05:00:37.327852854 +0000 UTC m=+66.371083037" watchObservedRunningTime="2025-10-13 05:00:38.367339228 +0000 UTC m=+67.410569371" Oct 13 05:00:39.622353 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:50444.service - OpenSSH per-connection server daemon (10.0.0.1:50444). Oct 13 05:00:39.690337 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 50444 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:39.691264 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:39.695339 systemd-logind[1533]: New session 16 of user core. Oct 13 05:00:39.701394 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 05:00:39.887236 sshd[5353]: Connection closed by 10.0.0.1 port 50444 Oct 13 05:00:39.887508 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:39.891837 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:50444.service: Deactivated successfully. Oct 13 05:00:39.894639 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:00:39.895753 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:00:39.897155 systemd-logind[1533]: Removed session 16. Oct 13 05:00:40.198103 containerd[1566]: time="2025-10-13T05:00:40.197990906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:40.198986 containerd[1566]: time="2025-10-13T05:00:40.198928490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 05:00:40.199876 containerd[1566]: time="2025-10-13T05:00:40.199840196Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:00:40.201498 containerd[1566]: time="2025-10-13T05:00:40.201467070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:00:40.202240 containerd[1566]: time="2025-10-13T05:00:40.202085526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 3.511030986s" Oct 13 05:00:40.202240 containerd[1566]: time="2025-10-13T05:00:40.202116403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 05:00:40.205411 containerd[1566]: time="2025-10-13T05:00:40.205377309Z" level=info msg="CreateContainer within sandbox \"be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:00:40.215300 containerd[1566]: time="2025-10-13T05:00:40.215248417Z" level=info msg="Container 5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:00:40.220224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959578624.mount: Deactivated successfully. Oct 13 05:00:40.224207 containerd[1566]: time="2025-10-13T05:00:40.224163024Z" level=info msg="CreateContainer within sandbox \"be8dff61bcb9fea81cc00d4ea297a9ab8e5ba588a568208ce17ae34dcb6e3f25\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442\"" Oct 13 05:00:40.224918 containerd[1566]: time="2025-10-13T05:00:40.224892109Z" level=info msg="StartContainer for \"5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442\"" Oct 13 05:00:40.226229 containerd[1566]: time="2025-10-13T05:00:40.226199775Z" level=info msg="connecting to shim 5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442" address="unix:///run/containerd/s/d9203a02dedfb0e9881f124aebf459f88ad71c0182a268be2fa6f4b9165e168a" protocol=ttrpc version=3 Oct 13 05:00:40.257460 systemd[1]: Started cri-containerd-5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442.scope - libcontainer container 5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442.
Oct 13 05:00:40.292082 containerd[1566]: time="2025-10-13T05:00:40.292032628Z" level=info msg="StartContainer for \"5e5b68c5ff94fb16ba449166862d07df48e5c78d2e4c55c42957c1e483556442\" returns successfully" Oct 13 05:00:40.336796 kubelet[2694]: I1013 05:00:40.336654 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kwwff" podStartSLOduration=19.432463021 podStartE2EDuration="52.336572624s" podCreationTimestamp="2025-10-13 04:59:48 +0000 UTC" firstStartedPulling="2025-10-13 05:00:07.299122766 +0000 UTC m=+36.342352949" lastFinishedPulling="2025-10-13 05:00:40.203232369 +0000 UTC m=+69.246462552" observedRunningTime="2025-10-13 05:00:40.335670396 +0000 UTC m=+69.378900579" watchObservedRunningTime="2025-10-13 05:00:40.336572624 +0000 UTC m=+69.379802767" Oct 13 05:00:41.052794 kubelet[2694]: E1013 05:00:41.052398 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:00:41.142351 kubelet[2694]: I1013 05:00:41.142314 2694 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:00:41.145564 kubelet[2694]: I1013 05:00:41.145419 2694 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:00:44.901662 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:50450.service - OpenSSH per-connection server daemon (10.0.0.1:50450). Oct 13 05:00:44.969888 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:44.971460 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:44.979341 systemd-logind[1533]: New session 17 of user core. 
Oct 13 05:00:44.990415 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:00:45.153592 sshd[5419]: Connection closed by 10.0.0.1 port 50450 Oct 13 05:00:45.154489 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:45.158348 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:50450.service: Deactivated successfully. Oct 13 05:00:45.160244 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:00:45.161004 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:00:45.161950 systemd-logind[1533]: Removed session 17. Oct 13 05:00:45.920504 kubelet[2694]: I1013 05:00:45.920410 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:00:48.037802 containerd[1566]: time="2025-10-13T05:00:48.037731502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\" id:\"40b6cc38cb6b3548dc222e9c3c292e43f3ec7f0cb2056b56091269107da17475\" pid:5446 exited_at:{seconds:1760331648 nanos:37380604}" Oct 13 05:00:50.169482 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:46914.service - OpenSSH per-connection server daemon (10.0.0.1:46914). Oct 13 05:00:50.240170 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:50.243167 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:50.248688 systemd-logind[1533]: New session 18 of user core. Oct 13 05:00:50.255896 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:00:50.405306 sshd[5463]: Connection closed by 10.0.0.1 port 46914 Oct 13 05:00:50.405175 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:50.408826 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:46914.service: Deactivated successfully. 
Oct 13 05:00:50.410671 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:00:50.411347 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:00:50.412126 systemd-logind[1533]: Removed session 18. Oct 13 05:00:52.222235 containerd[1566]: time="2025-10-13T05:00:52.222196005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0\" id:\"69b365a128d173712c922a15dbe17245edb37b56557ee66438ab42a24676b05a\" pid:5491 exited_at:{seconds:1760331652 nanos:221934497}" Oct 13 05:00:55.427932 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:46934.service - OpenSSH per-connection server daemon (10.0.0.1:46934). Oct 13 05:00:55.473326 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 46934 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c Oct 13 05:00:55.474642 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:00:55.479346 systemd-logind[1533]: New session 19 of user core. Oct 13 05:00:55.486994 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:00:55.635788 sshd[5506]: Connection closed by 10.0.0.1 port 46934 Oct 13 05:00:55.636151 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Oct 13 05:00:55.641822 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:46934.service: Deactivated successfully. Oct 13 05:00:55.644046 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:00:55.644977 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:00:55.646324 systemd-logind[1533]: Removed session 19. 
Oct 13 05:00:56.052657 kubelet[2694]: E1013 05:00:56.052559 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:00:58.322582 containerd[1566]: time="2025-10-13T05:00:58.322547229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fef26723612bc15c457f4c7c36dbdfdd28f7a0fbe94d1aad08304a7bd927ed0\" id:\"581412c1bd31d07aaff193963d0ab5382175e7b741a5d865bfd2e91b78fab999\" pid:5531 exited_at:{seconds:1760331658 nanos:322355994}"
Oct 13 05:01:00.650856 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:37908.service - OpenSSH per-connection server daemon (10.0.0.1:37908).
Oct 13 05:01:00.717157 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 37908 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:00.718951 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:00.722897 systemd-logind[1533]: New session 20 of user core.
Oct 13 05:01:00.730432 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:01:00.862129 sshd[5545]: Connection closed by 10.0.0.1 port 37908
Oct 13 05:01:00.862624 sshd-session[5542]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:00.873676 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:37908.service: Deactivated successfully.
Oct 13 05:01:00.876938 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:01:00.878371 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:01:00.881426 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:37910.service - OpenSSH per-connection server daemon (10.0.0.1:37910).
Oct 13 05:01:00.881960 systemd-logind[1533]: Removed session 20.
Oct 13 05:01:00.931715 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 37910 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:00.932853 sshd-session[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:00.937353 systemd-logind[1533]: New session 21 of user core.
Oct 13 05:01:00.946415 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:01:01.152197 sshd[5562]: Connection closed by 10.0.0.1 port 37910
Oct 13 05:01:01.152651 sshd-session[5559]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:01.162822 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:37910.service: Deactivated successfully.
Oct 13 05:01:01.164711 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:01:01.165574 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:01:01.168694 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:37914.service - OpenSSH per-connection server daemon (10.0.0.1:37914).
Oct 13 05:01:01.169639 systemd-logind[1533]: Removed session 21.
Oct 13 05:01:01.229454 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 37914 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:01.230757 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:01.235957 systemd-logind[1533]: New session 22 of user core.
Oct 13 05:01:01.249516 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:01:01.920735 sshd[5577]: Connection closed by 10.0.0.1 port 37914
Oct 13 05:01:01.921287 sshd-session[5574]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:01.940593 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:37914.service: Deactivated successfully.
Oct 13 05:01:01.944233 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:01:01.945992 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:01:01.952197 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:37920.service - OpenSSH per-connection server daemon (10.0.0.1:37920).
Oct 13 05:01:01.953406 systemd-logind[1533]: Removed session 22.
Oct 13 05:01:02.016532 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 37920 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:02.018419 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:02.023339 systemd-logind[1533]: New session 23 of user core.
Oct 13 05:01:02.034464 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:01:02.366178 sshd[5598]: Connection closed by 10.0.0.1 port 37920
Oct 13 05:01:02.367729 sshd-session[5595]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:02.379925 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:37920.service: Deactivated successfully.
Oct 13 05:01:02.382778 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:01:02.386853 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:01:02.390589 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:37928.service - OpenSSH per-connection server daemon (10.0.0.1:37928).
Oct 13 05:01:02.391121 systemd-logind[1533]: Removed session 23.
Oct 13 05:01:02.447364 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 37928 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:02.448714 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:02.452885 systemd-logind[1533]: New session 24 of user core.
Oct 13 05:01:02.462472 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:01:02.612614 sshd[5612]: Connection closed by 10.0.0.1 port 37928
Oct 13 05:01:02.612954 sshd-session[5609]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:02.617426 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:37928.service: Deactivated successfully.
Oct 13 05:01:02.619851 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:01:02.621055 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:01:02.622992 systemd-logind[1533]: Removed session 24.
Oct 13 05:01:06.391089 containerd[1566]: time="2025-10-13T05:01:06.391048617Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6baa9e22a4cda7aa28139596bf4897bc4a3345b0bbecc867dda14b35e0abf52d\" id:\"ee58e751a1ba35ac34fdb26d087df0b08ace72b9a8b346bf246cbce2b928a023\" pid:5640 exited_at:{seconds:1760331666 nanos:390360939}"
Oct 13 05:01:07.625037 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:37226.service - OpenSSH per-connection server daemon (10.0.0.1:37226).
Oct 13 05:01:07.681284 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 37226 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:07.682720 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:07.686797 systemd-logind[1533]: New session 25 of user core.
Oct 13 05:01:07.692442 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:01:07.821210 sshd[5659]: Connection closed by 10.0.0.1 port 37226
Oct 13 05:01:07.820741 sshd-session[5656]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:07.824741 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:37226.service: Deactivated successfully.
Oct 13 05:01:07.826469 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:01:07.827354 systemd-logind[1533]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:01:07.829387 systemd-logind[1533]: Removed session 25.
Oct 13 05:01:09.051983 kubelet[2694]: E1013 05:01:09.051950 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:01:10.052486 kubelet[2694]: E1013 05:01:10.051953 2694 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:01:12.840018 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:37234.service - OpenSSH per-connection server daemon (10.0.0.1:37234).
Oct 13 05:01:12.902908 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 37234 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:12.904013 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:12.907862 systemd-logind[1533]: New session 26 of user core.
Oct 13 05:01:12.918915 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 13 05:01:13.060241 sshd[5677]: Connection closed by 10.0.0.1 port 37234
Oct 13 05:01:13.059644 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:13.063367 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:37234.service: Deactivated successfully.
Oct 13 05:01:13.065996 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 05:01:13.066778 systemd-logind[1533]: Session 26 logged out. Waiting for processes to exit.
Oct 13 05:01:13.067936 systemd-logind[1533]: Removed session 26.
Oct 13 05:01:18.034954 containerd[1566]: time="2025-10-13T05:01:18.034821167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ece9461fb5337460cacaf60930b89e3f3496988bfa32e83e81e9b4b1c8e0e09\" id:\"8d70687538872883a56ec1dfcb26332b3e397ed05c07385ea556602b93446942\" pid:5701 exited_at:{seconds:1760331678 nanos:34185633}"
Oct 13 05:01:18.080038 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:38652.service - OpenSSH per-connection server daemon (10.0.0.1:38652).
Oct 13 05:01:18.159238 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 38652 ssh2: RSA SHA256:hda7tEZiENufrVU/Fi4L6jJcDJNGwI829sWbBxYIz5c
Oct 13 05:01:18.161236 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:01:18.164995 systemd-logind[1533]: New session 27 of user core.
Oct 13 05:01:18.170422 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 13 05:01:18.342602 sshd[5717]: Connection closed by 10.0.0.1 port 38652
Oct 13 05:01:18.342951 sshd-session[5714]: pam_unix(sshd:session): session closed for user core
Oct 13 05:01:18.346504 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:38652.service: Deactivated successfully.
Oct 13 05:01:18.350757 systemd[1]: session-27.scope: Deactivated successfully.
Oct 13 05:01:18.351766 systemd-logind[1533]: Session 27 logged out. Waiting for processes to exit.
Oct 13 05:01:18.354810 systemd-logind[1533]: Removed session 27.