Sep 11 23:54:56.781300 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 11 23:54:56.781321 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Sep 11 22:16:14 -00 2025
Sep 11 23:54:56.781330 kernel: KASLR enabled
Sep 11 23:54:56.781335 kernel: efi: EFI v2.7 by EDK II
Sep 11 23:54:56.781341 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 11 23:54:56.781346 kernel: random: crng init done
Sep 11 23:54:56.781353 kernel: secureboot: Secure boot disabled
Sep 11 23:54:56.781358 kernel: ACPI: Early table checksum verification disabled
Sep 11 23:54:56.781364 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 11 23:54:56.781371 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 23:54:56.781377 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781383 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781388 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781394 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781401 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781408 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781414 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781420 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781426 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:54:56.781432 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 11 23:54:56.781438 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 11 23:54:56.781444 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:54:56.781450 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 11 23:54:56.781456 kernel: Zone ranges:
Sep 11 23:54:56.781462 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:54:56.781470 kernel: DMA32 empty
Sep 11 23:54:56.781476 kernel: Normal empty
Sep 11 23:54:56.781481 kernel: Device empty
Sep 11 23:54:56.781487 kernel: Movable zone start for each node
Sep 11 23:54:56.781493 kernel: Early memory node ranges
Sep 11 23:54:56.781499 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 11 23:54:56.781505 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 11 23:54:56.781511 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 11 23:54:56.781517 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 11 23:54:56.781523 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 11 23:54:56.781529 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 11 23:54:56.781535 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 11 23:54:56.781543 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 11 23:54:56.781548 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 11 23:54:56.781555 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 11 23:54:56.781563 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 11 23:54:56.781569 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 11 23:54:56.781576 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 11 23:54:56.781583 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:54:56.781590 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 11 23:54:56.781596 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 11 23:54:56.781602 kernel: psci: probing for conduit method from ACPI.
Sep 11 23:54:56.781608 kernel: psci: PSCIv1.1 detected in firmware.
Sep 11 23:54:56.781615 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 11 23:54:56.781621 kernel: psci: Trusted OS migration not required
Sep 11 23:54:56.781627 kernel: psci: SMC Calling Convention v1.1
Sep 11 23:54:56.781634 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 11 23:54:56.781641 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 11 23:54:56.781648 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 11 23:54:56.781655 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 11 23:54:56.781661 kernel: Detected PIPT I-cache on CPU0
Sep 11 23:54:56.781668 kernel: CPU features: detected: GIC system register CPU interface
Sep 11 23:54:56.781674 kernel: CPU features: detected: Spectre-v4
Sep 11 23:54:56.781690 kernel: CPU features: detected: Spectre-BHB
Sep 11 23:54:56.781697 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 11 23:54:56.781704 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 11 23:54:56.781710 kernel: CPU features: detected: ARM erratum 1418040
Sep 11 23:54:56.781716 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 11 23:54:56.781723 kernel: alternatives: applying boot alternatives
Sep 11 23:54:56.781730 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=34cdae46b43e6281eb14909b07c5254135a938c8cecf4370cc2216c267809c7a
Sep 11 23:54:56.781796 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 23:54:56.781803 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 23:54:56.781810 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 23:54:56.781816 kernel: Fallback order for Node 0: 0
Sep 11 23:54:56.781823 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 11 23:54:56.781829 kernel: Policy zone: DMA
Sep 11 23:54:56.781836 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 23:54:56.781842 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 11 23:54:56.781848 kernel: software IO TLB: area num 4.
Sep 11 23:54:56.781855 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 11 23:54:56.781861 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 11 23:54:56.781870 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 23:54:56.781876 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 23:54:56.781883 kernel: rcu: RCU event tracing is enabled.
Sep 11 23:54:56.781890 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 23:54:56.781896 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 23:54:56.781903 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 23:54:56.781909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 23:54:56.781916 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 23:54:56.781923 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:54:56.781930 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:54:56.781937 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 11 23:54:56.781944 kernel: GICv3: 256 SPIs implemented
Sep 11 23:54:56.781951 kernel: GICv3: 0 Extended SPIs implemented
Sep 11 23:54:56.781957 kernel: Root IRQ handler: gic_handle_irq
Sep 11 23:54:56.781964 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 11 23:54:56.781970 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 11 23:54:56.781977 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 11 23:54:56.781983 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 11 23:54:56.781990 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 11 23:54:56.781996 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 11 23:54:56.782003 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 11 23:54:56.782009 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 11 23:54:56.782016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 23:54:56.782023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:54:56.782030 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 11 23:54:56.782036 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 11 23:54:56.782043 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 11 23:54:56.782049 kernel: arm-pv: using stolen time PV
Sep 11 23:54:56.782056 kernel: Console: colour dummy device 80x25
Sep 11 23:54:56.782062 kernel: ACPI: Core revision 20240827
Sep 11 23:54:56.782069 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 11 23:54:56.782076 kernel: pid_max: default: 32768 minimum: 301
Sep 11 23:54:56.782082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 23:54:56.782090 kernel: landlock: Up and running.
Sep 11 23:54:56.782096 kernel: SELinux: Initializing.
Sep 11 23:54:56.782103 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:54:56.782109 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:54:56.782116 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 23:54:56.782123 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 23:54:56.782129 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 23:54:56.782136 kernel: Remapping and enabling EFI services.
Sep 11 23:54:56.782142 kernel: smp: Bringing up secondary CPUs ...
Sep 11 23:54:56.782155 kernel: Detected PIPT I-cache on CPU1
Sep 11 23:54:56.782162 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 11 23:54:56.782169 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 11 23:54:56.782177 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:54:56.782185 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 11 23:54:56.782191 kernel: Detected PIPT I-cache on CPU2
Sep 11 23:54:56.782199 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 11 23:54:56.782206 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 11 23:54:56.782215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:54:56.782222 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 11 23:54:56.782229 kernel: Detected PIPT I-cache on CPU3
Sep 11 23:54:56.782236 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 11 23:54:56.782243 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 11 23:54:56.782250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:54:56.782257 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 11 23:54:56.782264 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 23:54:56.782271 kernel: SMP: Total of 4 processors activated.
Sep 11 23:54:56.782279 kernel: CPU: All CPU(s) started at EL1
Sep 11 23:54:56.782286 kernel: CPU features: detected: 32-bit EL0 Support
Sep 11 23:54:56.782293 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 11 23:54:56.782300 kernel: CPU features: detected: Common not Private translations
Sep 11 23:54:56.782307 kernel: CPU features: detected: CRC32 instructions
Sep 11 23:54:56.782314 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 11 23:54:56.782321 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 11 23:54:56.782328 kernel: CPU features: detected: LSE atomic instructions
Sep 11 23:54:56.782335 kernel: CPU features: detected: Privileged Access Never
Sep 11 23:54:56.782343 kernel: CPU features: detected: RAS Extension Support
Sep 11 23:54:56.782350 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 11 23:54:56.782357 kernel: alternatives: applying system-wide alternatives
Sep 11 23:54:56.782364 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 11 23:54:56.782372 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 11 23:54:56.782379 kernel: devtmpfs: initialized
Sep 11 23:54:56.782386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 23:54:56.782393 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 23:54:56.782400 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 11 23:54:56.782408 kernel: 0 pages in range for non-PLT usage
Sep 11 23:54:56.782416 kernel: 508576 pages in range for PLT usage
Sep 11 23:54:56.782422 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 23:54:56.782429 kernel: SMBIOS 3.0.0 present.
Sep 11 23:54:56.782436 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 11 23:54:56.782443 kernel: DMI: Memory slots populated: 1/1
Sep 11 23:54:56.782450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 23:54:56.782458 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 11 23:54:56.782465 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 11 23:54:56.782473 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 11 23:54:56.782480 kernel: audit: initializing netlink subsys (disabled)
Sep 11 23:54:56.782487 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 11 23:54:56.782494 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 23:54:56.782501 kernel: cpuidle: using governor menu
Sep 11 23:54:56.782509 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 11 23:54:56.782516 kernel: ASID allocator initialised with 32768 entries
Sep 11 23:54:56.782522 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 23:54:56.782529 kernel: Serial: AMBA PL011 UART driver
Sep 11 23:54:56.782538 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 23:54:56.782545 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 23:54:56.782552 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 11 23:54:56.782559 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 11 23:54:56.782566 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 23:54:56.782573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 23:54:56.782580 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 11 23:54:56.782587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 11 23:54:56.782594 kernel: ACPI: Added _OSI(Module Device)
Sep 11 23:54:56.782602 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 23:54:56.782609 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 23:54:56.782616 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 23:54:56.782623 kernel: ACPI: Interpreter enabled
Sep 11 23:54:56.782630 kernel: ACPI: Using GIC for interrupt routing
Sep 11 23:54:56.782637 kernel: ACPI: MCFG table detected, 1 entries
Sep 11 23:54:56.782644 kernel: ACPI: CPU0 has been hot-added
Sep 11 23:54:56.782651 kernel: ACPI: CPU1 has been hot-added
Sep 11 23:54:56.782658 kernel: ACPI: CPU2 has been hot-added
Sep 11 23:54:56.782665 kernel: ACPI: CPU3 has been hot-added
Sep 11 23:54:56.782673 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 11 23:54:56.782686 kernel: printk: legacy console [ttyAMA0] enabled
Sep 11 23:54:56.782693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 23:54:56.782855 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 23:54:56.782924 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 11 23:54:56.782986 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 11 23:54:56.783045 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 11 23:54:56.783105 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 11 23:54:56.783115 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 11 23:54:56.783122 kernel: PCI host bridge to bus 0000:00
Sep 11 23:54:56.783187 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 11 23:54:56.783241 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 11 23:54:56.783295 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 11 23:54:56.783347 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 23:54:56.783427 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 11 23:54:56.783505 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 23:54:56.783565 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 11 23:54:56.783625 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 11 23:54:56.783696 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 11 23:54:56.783778 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 11 23:54:56.783841 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 11 23:54:56.783905 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 11 23:54:56.783959 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 11 23:54:56.784012 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 11 23:54:56.784065 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 11 23:54:56.784075 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 11 23:54:56.784082 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 11 23:54:56.784089 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 11 23:54:56.784098 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 11 23:54:56.784105 kernel: iommu: Default domain type: Translated
Sep 11 23:54:56.784112 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 11 23:54:56.784119 kernel: efivars: Registered efivars operations
Sep 11 23:54:56.784126 kernel: vgaarb: loaded
Sep 11 23:54:56.784133 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 11 23:54:56.784140 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 23:54:56.784147 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 23:54:56.784154 kernel: pnp: PnP ACPI init
Sep 11 23:54:56.784219 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 11 23:54:56.784229 kernel: pnp: PnP ACPI: found 1 devices
Sep 11 23:54:56.784236 kernel: NET: Registered PF_INET protocol family
Sep 11 23:54:56.784243 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 23:54:56.784250 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 23:54:56.784257 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 23:54:56.784264 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 23:54:56.784271 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 23:54:56.784280 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 23:54:56.784287 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:54:56.784294 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:54:56.784301 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 23:54:56.784308 kernel: PCI: CLS 0 bytes, default 64
Sep 11 23:54:56.784315 kernel: kvm [1]: HYP mode not available
Sep 11 23:54:56.784322 kernel: Initialise system trusted keyrings
Sep 11 23:54:56.784328 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 23:54:56.784335 kernel: Key type asymmetric registered
Sep 11 23:54:56.784343 kernel: Asymmetric key parser 'x509' registered
Sep 11 23:54:56.784350 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 11 23:54:56.784357 kernel: io scheduler mq-deadline registered
Sep 11 23:54:56.784364 kernel: io scheduler kyber registered
Sep 11 23:54:56.784371 kernel: io scheduler bfq registered
Sep 11 23:54:56.784378 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 11 23:54:56.784385 kernel: ACPI: button: Power Button [PWRB]
Sep 11 23:54:56.784392 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 11 23:54:56.784449 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 11 23:54:56.784460 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 23:54:56.784467 kernel: thunder_xcv, ver 1.0
Sep 11 23:54:56.784473 kernel: thunder_bgx, ver 1.0
Sep 11 23:54:56.784480 kernel: nicpf, ver 1.0
Sep 11 23:54:56.784487 kernel: nicvf, ver 1.0
Sep 11 23:54:56.784552 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 11 23:54:56.784608 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-11T23:54:56 UTC (1757634896)
Sep 11 23:54:56.784618 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 23:54:56.784625 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 11 23:54:56.784634 kernel: watchdog: NMI not fully supported
Sep 11 23:54:56.784641 kernel: watchdog: Hard watchdog permanently disabled
Sep 11 23:54:56.784648 kernel: NET: Registered PF_INET6 protocol family
Sep 11 23:54:56.784655 kernel: Segment Routing with IPv6
Sep 11 23:54:56.784662 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 23:54:56.784669 kernel: NET: Registered PF_PACKET protocol family
Sep 11 23:54:56.784683 kernel: Key type dns_resolver registered
Sep 11 23:54:56.784690 kernel: registered taskstats version 1
Sep 11 23:54:56.784698 kernel: Loading compiled-in X.509 certificates
Sep 11 23:54:56.784706 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c76a2532dfc607285c10ef525f008171185de1e8'
Sep 11 23:54:56.784713 kernel: Demotion targets for Node 0: null
Sep 11 23:54:56.784720 kernel: Key type .fscrypt registered
Sep 11 23:54:56.784727 kernel: Key type fscrypt-provisioning registered
Sep 11 23:54:56.784734 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 23:54:56.784750 kernel: ima: Allocated hash algorithm: sha1
Sep 11 23:54:56.784765 kernel: ima: No architecture policies found
Sep 11 23:54:56.784773 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 11 23:54:56.784781 kernel: clk: Disabling unused clocks
Sep 11 23:54:56.784789 kernel: PM: genpd: Disabling unused power domains
Sep 11 23:54:56.784796 kernel: Warning: unable to open an initial console.
Sep 11 23:54:56.784803 kernel: Freeing unused kernel memory: 38912K
Sep 11 23:54:56.784810 kernel: Run /init as init process
Sep 11 23:54:56.784817 kernel: with arguments:
Sep 11 23:54:56.784824 kernel: /init
Sep 11 23:54:56.784831 kernel: with environment:
Sep 11 23:54:56.784838 kernel: HOME=/
Sep 11 23:54:56.784845 kernel: TERM=linux
Sep 11 23:54:56.784853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 23:54:56.784861 systemd[1]: Successfully made /usr/ read-only.
Sep 11 23:54:56.784871 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:54:56.784879 systemd[1]: Detected virtualization kvm.
Sep 11 23:54:56.784886 systemd[1]: Detected architecture arm64.
Sep 11 23:54:56.784894 systemd[1]: Running in initrd.
Sep 11 23:54:56.784901 systemd[1]: No hostname configured, using default hostname.
Sep 11 23:54:56.784910 systemd[1]: Hostname set to .
Sep 11 23:54:56.784918 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:54:56.784925 systemd[1]: Queued start job for default target initrd.target.
Sep 11 23:54:56.784932 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:54:56.784940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:54:56.784948 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 23:54:56.784956 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:54:56.784963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 23:54:56.784973 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 23:54:56.784981 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 23:54:56.784989 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 23:54:56.784997 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:54:56.785005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:54:56.785013 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:54:56.785020 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:54:56.785029 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:54:56.785036 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:54:56.785044 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 23:54:56.785052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 23:54:56.785059 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 23:54:56.785067 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 23:54:56.785075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:54:56.785082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:54:56.785091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:54:56.785099 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:54:56.785106 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 23:54:56.785114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:54:56.785122 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 23:54:56.785130 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 23:54:56.785137 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 23:54:56.785145 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:54:56.785152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:54:56.785161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:54:56.785169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:54:56.785177 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 23:54:56.785185 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 23:54:56.785194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 23:54:56.785220 systemd-journald[244]: Collecting audit messages is disabled.
Sep 11 23:54:56.785239 systemd-journald[244]: Journal started
Sep 11 23:54:56.785258 systemd-journald[244]: Runtime Journal (/run/log/journal/9b97cfe2a5d44bdea2fefaff4dbf63f1) is 6M, max 48.5M, 42.4M free.
Sep 11 23:54:56.786096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:54:56.776428 systemd-modules-load[246]: Inserted module 'overlay'
Sep 11 23:54:56.789458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 23:54:56.792520 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:54:56.792550 kernel: Bridge firewalling registered
Sep 11 23:54:56.792999 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 11 23:54:56.793390 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:54:56.795795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:54:56.799824 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 23:54:56.801407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:54:56.803242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:54:56.821284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:54:56.832404 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 23:54:56.832407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:54:56.833683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:54:56.837014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:54:56.839199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:54:56.842467 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 23:54:56.844986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:54:56.865681 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=34cdae46b43e6281eb14909b07c5254135a938c8cecf4370cc2216c267809c7a
Sep 11 23:54:56.879020 systemd-resolved[288]: Positive Trust Anchors:
Sep 11 23:54:56.879040 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:54:56.879070 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:54:56.883787 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 11 23:54:56.884819 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:54:56.888498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:54:56.940790 kernel: SCSI subsystem initialized
Sep 11 23:54:56.945769 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 23:54:56.952766 kernel: iscsi: registered transport (tcp)
Sep 11 23:54:56.965979 kernel: iscsi: registered transport (qla4xxx)
Sep 11 23:54:56.966016 kernel: QLogic iSCSI HBA Driver
Sep 11 23:54:56.981866 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:54:57.008362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:54:57.019817 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:54:57.070395 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:54:57.072551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 23:54:57.136810 kernel: raid6: neonx8 gen() 15684 MB/s
Sep 11 23:54:57.153762 kernel: raid6: neonx4 gen() 15688 MB/s
Sep 11 23:54:57.170784 kernel: raid6: neonx2 gen() 13151 MB/s
Sep 11 23:54:57.187755 kernel: raid6: neonx1 gen() 10368 MB/s
Sep 11 23:54:57.204757 kernel: raid6: int64x8 gen() 6840 MB/s
Sep 11 23:54:57.221756 kernel: raid6: int64x4 gen() 7291 MB/s
Sep 11 23:54:57.238760 kernel: raid6: int64x2 gen() 6008 MB/s
Sep 11 23:54:57.255756 kernel: raid6: int64x1 gen() 5012 MB/s
Sep 11 23:54:57.255777 kernel: raid6: using algorithm neonx4 gen() 15688 MB/s
Sep 11 23:54:57.272765 kernel: raid6: .... xor() 12242 MB/s, rmw enabled
Sep 11 23:54:57.272788 kernel: raid6: using neon recovery algorithm
Sep 11 23:54:57.277986 kernel: xor: measuring software checksum speed
Sep 11 23:54:57.278008 kernel: 8regs : 21636 MB/sec
Sep 11 23:54:57.279027 kernel: 32regs : 21693 MB/sec
Sep 11 23:54:57.279041 kernel: arm64_neon : 28061 MB/sec
Sep 11 23:54:57.279051 kernel: xor: using function: arm64_neon (28061 MB/sec)
Sep 11 23:54:57.331780 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 23:54:57.338383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:54:57.341667 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:54:57.365886 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Sep 11 23:54:57.369922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:54:57.376887 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 23:54:57.404618 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Sep 11 23:54:57.426585 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 23:54:57.428672 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:54:57.479786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:54:57.481588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 23:54:57.540758 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 11 23:54:57.540944 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 23:54:57.550657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 23:54:57.550712 kernel: GPT:9289727 != 19775487
Sep 11 23:54:57.552029 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 23:54:57.552061 kernel: GPT:9289727 != 19775487
Sep 11 23:54:57.552071 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 23:54:57.553005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:54:57.555171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:54:57.555290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:54:57.560846 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:54:57.565097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:54:57.587500 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 23:54:57.595134 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 23:54:57.603261 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 23:54:57.604537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:54:57.618209 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 23:54:57.619265 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 23:54:57.628039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 23:54:57.629097 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 23:54:57.630693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 23:54:57.632626 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 23:54:57.635028 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 23:54:57.636535 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 23:54:57.660154 disk-uuid[592]: Primary Header is updated. Sep 11 23:54:57.660154 disk-uuid[592]: Secondary Entries is updated. Sep 11 23:54:57.660154 disk-uuid[592]: Secondary Header is updated. Sep 11 23:54:57.664767 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 23:54:57.670997 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 23:54:58.678720 disk-uuid[595]: The operation has completed successfully. Sep 11 23:54:58.679944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 23:54:58.705619 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 23:54:58.705752 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 23:54:58.727882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 23:54:58.756922 sh[612]: Success Sep 11 23:54:58.769760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 23:54:58.769804 kernel: device-mapper: uevent: version 1.0.3 Sep 11 23:54:58.771171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 23:54:58.778780 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 11 23:54:58.803451 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 23:54:58.806155 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 11 23:54:58.838392 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 23:54:58.845375 kernel: BTRFS: device fsid 070f11bc-6881-4580-bbfd-8e1bd2605f24 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624) Sep 11 23:54:58.845408 kernel: BTRFS info (device dm-0): first mount of filesystem 070f11bc-6881-4580-bbfd-8e1bd2605f24 Sep 11 23:54:58.845419 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 11 23:54:58.850800 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 23:54:58.850817 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 23:54:58.851949 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 23:54:58.853144 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 23:54:58.854235 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 23:54:58.855055 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 23:54:58.856511 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 23:54:58.880962 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Sep 11 23:54:58.881007 kernel: BTRFS info (device vda6): first mount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234 Sep 11 23:54:58.881023 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 11 23:54:58.883756 kernel: BTRFS info (device vda6): turning on async discard Sep 11 23:54:58.883791 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 23:54:58.887768 kernel: BTRFS info (device vda6): last unmount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234 Sep 11 23:54:58.889088 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 11 23:54:58.891013 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 23:54:58.957070 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 23:54:58.960956 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 23:54:59.000359 systemd-networkd[800]: lo: Link UP Sep 11 23:54:59.000372 systemd-networkd[800]: lo: Gained carrier Sep 11 23:54:59.001093 ignition[700]: Ignition 2.21.0 Sep 11 23:54:59.001152 systemd-networkd[800]: Enumeration completed Sep 11 23:54:59.001103 ignition[700]: Stage: fetch-offline Sep 11 23:54:59.001261 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 23:54:59.001364 ignition[700]: no configs at "/usr/lib/ignition/base.d" Sep 11 23:54:59.001524 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 23:54:59.001376 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:54:59.001527 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 23:54:59.001692 ignition[700]: parsed url from cmdline: "" Sep 11 23:54:59.002321 systemd-networkd[800]: eth0: Link UP Sep 11 23:54:59.001696 ignition[700]: no config URL provided Sep 11 23:54:59.002458 systemd-networkd[800]: eth0: Gained carrier Sep 11 23:54:59.001701 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 23:54:59.002466 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 23:54:59.001711 ignition[700]: no config at "/usr/lib/ignition/user.ign" Sep 11 23:54:59.002961 systemd[1]: Reached target network.target - Network. 
Sep 11 23:54:59.001732 ignition[700]: op(1): [started] loading QEMU firmware config module Sep 11 23:54:59.001749 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 23:54:59.020787 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 23:54:59.007650 ignition[700]: op(1): [finished] loading QEMU firmware config module Sep 11 23:54:59.060275 ignition[700]: parsing config with SHA512: b2217020452bdf4f99ccf2dc4b671d56cac7f58e6d051b00a121f75d7dee33775a3985dfedaf1c0593238ee150c40162159a48d86abe65b9e2ac40332545b297 Sep 11 23:54:59.064949 unknown[700]: fetched base config from "system" Sep 11 23:54:59.064967 unknown[700]: fetched user config from "qemu" Sep 11 23:54:59.065436 ignition[700]: fetch-offline: fetch-offline passed Sep 11 23:54:59.067468 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 23:54:59.065496 ignition[700]: Ignition finished successfully Sep 11 23:54:59.068764 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 23:54:59.069506 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 23:54:59.099328 ignition[812]: Ignition 2.21.0 Sep 11 23:54:59.099345 ignition[812]: Stage: kargs Sep 11 23:54:59.099490 ignition[812]: no configs at "/usr/lib/ignition/base.d" Sep 11 23:54:59.099499 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:54:59.101323 ignition[812]: kargs: kargs passed Sep 11 23:54:59.101402 ignition[812]: Ignition finished successfully Sep 11 23:54:59.104726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 23:54:59.106601 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 11 23:54:59.132960 ignition[821]: Ignition 2.21.0 Sep 11 23:54:59.135023 ignition[821]: Stage: disks Sep 11 23:54:59.135216 ignition[821]: no configs at "/usr/lib/ignition/base.d" Sep 11 23:54:59.135229 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:54:59.136700 ignition[821]: disks: disks passed Sep 11 23:54:59.136775 ignition[821]: Ignition finished successfully Sep 11 23:54:59.138394 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 23:54:59.139420 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 23:54:59.140623 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 23:54:59.142678 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 23:54:59.144335 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 23:54:59.145689 systemd[1]: Reached target basic.target - Basic System. Sep 11 23:54:59.147851 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 23:54:59.182819 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 23:54:59.188165 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 23:54:59.190079 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 23:54:59.251755 kernel: EXT4-fs (vda9): mounted filesystem 358f7642-1e9a-4460-bcb4-1ef3d420e352 r/w with ordered data mode. Quota mode: none. Sep 11 23:54:59.252094 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 23:54:59.253137 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 23:54:59.255828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 23:54:59.257843 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 23:54:59.258622 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 11 23:54:59.258661 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 23:54:59.258692 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 23:54:59.274141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 23:54:59.277888 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 23:54:59.281975 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 11 23:54:59.282004 kernel: BTRFS info (device vda6): first mount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234 Sep 11 23:54:59.282015 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 11 23:54:59.283798 kernel: BTRFS info (device vda6): turning on async discard Sep 11 23:54:59.283822 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 23:54:59.285422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 23:54:59.312865 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 23:54:59.315891 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 11 23:54:59.319993 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 23:54:59.323259 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 23:54:59.398847 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 23:54:59.400574 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 23:54:59.402265 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 23:54:59.429222 kernel: BTRFS info (device vda6): last unmount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234 Sep 11 23:54:59.450899 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 11 23:54:59.466872 ignition[953]: INFO : Ignition 2.21.0 Sep 11 23:54:59.466872 ignition[953]: INFO : Stage: mount Sep 11 23:54:59.469012 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 23:54:59.469012 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:54:59.471952 ignition[953]: INFO : mount: mount passed Sep 11 23:54:59.471952 ignition[953]: INFO : Ignition finished successfully Sep 11 23:54:59.471261 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 23:54:59.473600 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 23:54:59.844192 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 23:54:59.845730 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 23:54:59.865759 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Sep 11 23:54:59.867757 kernel: BTRFS info (device vda6): first mount of filesystem 2cbf2c8e-1b28-4a7c-a6d6-f07090d47234 Sep 11 23:54:59.867788 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 11 23:54:59.870229 kernel: BTRFS info (device vda6): turning on async discard Sep 11 23:54:59.870245 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 23:54:59.871593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 11 23:54:59.899769 ignition[982]: INFO : Ignition 2.21.0 Sep 11 23:54:59.899769 ignition[982]: INFO : Stage: files Sep 11 23:54:59.902845 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 23:54:59.902845 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:54:59.902845 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Sep 11 23:54:59.902845 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 23:54:59.902845 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 23:54:59.910920 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 23:54:59.910920 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 23:54:59.910920 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 23:54:59.910920 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 11 23:54:59.910920 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 11 23:54:59.905101 unknown[982]: wrote ssh authorized keys file for user: core Sep 11 23:54:59.955577 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 23:55:00.411559 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 11 23:55:00.411559 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 11 
23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 23:55:00.415262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 23:55:00.425823 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 23:55:00.425823 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 11 23:55:00.425823 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 11 23:55:00.425823 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 11 23:55:00.425823 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 11 23:55:00.862608 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 11 23:55:00.978848 systemd-networkd[800]: eth0: Gained IPv6LL Sep 11 23:55:01.230796 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 11 23:55:01.230796 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 11 23:55:01.233983 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 11 23:55:01.247306 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 23:55:01.250574 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 23:55:01.251838 
ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 11 23:55:01.251838 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 11 23:55:01.251838 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 23:55:01.251838 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 23:55:01.251838 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 23:55:01.251838 ignition[982]: INFO : files: files passed Sep 11 23:55:01.251838 ignition[982]: INFO : Ignition finished successfully Sep 11 23:55:01.253200 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 23:55:01.258875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 23:55:01.274206 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 23:55:01.277869 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 23:55:01.277967 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 23:55:01.283092 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Sep 11 23:55:01.285735 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 23:55:01.285735 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 23:55:01.288937 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 23:55:01.289600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 11 23:55:01.291405 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 23:55:01.293882 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 23:55:01.322983 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 23:55:01.323089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 23:55:01.324904 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 23:55:01.326426 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 23:55:01.327826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 23:55:01.328536 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 23:55:01.351468 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 23:55:01.353633 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 23:55:01.384753 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 23:55:01.385693 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 23:55:01.387337 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 23:55:01.388652 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 23:55:01.388787 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 23:55:01.390735 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 23:55:01.392448 systemd[1]: Stopped target basic.target - Basic System. Sep 11 23:55:01.393726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 23:55:01.395082 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 23:55:01.396598 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Sep 11 23:55:01.398210 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 23:55:01.399640 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 23:55:01.401093 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 23:55:01.402612 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 23:55:01.404180 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 23:55:01.405643 systemd[1]: Stopped target swap.target - Swaps. Sep 11 23:55:01.406856 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 23:55:01.406973 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 23:55:01.408786 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 23:55:01.410473 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 23:55:01.411940 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 23:55:01.412836 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 23:55:01.414478 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 23:55:01.414584 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 23:55:01.416880 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 23:55:01.416988 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 23:55:01.418543 systemd[1]: Stopped target paths.target - Path Units. Sep 11 23:55:01.419856 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 23:55:01.420823 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 23:55:01.422172 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 23:55:01.423420 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 11 23:55:01.425079 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 23:55:01.425160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 23:55:01.426342 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 23:55:01.426417 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 23:55:01.427587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 23:55:01.427701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 23:55:01.428985 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 23:55:01.429078 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 23:55:01.431082 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 23:55:01.432087 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 23:55:01.432220 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 23:55:01.434369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 11 23:55:01.435231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 23:55:01.435347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 23:55:01.436705 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 23:55:01.436832 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 23:55:01.441471 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 23:55:01.447779 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 23:55:01.455062 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 23:55:01.459466 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 23:55:01.460290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 11 23:55:01.463908 ignition[1037]: INFO : Ignition 2.21.0 Sep 11 23:55:01.463908 ignition[1037]: INFO : Stage: umount Sep 11 23:55:01.463908 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 23:55:01.463908 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 23:55:01.466684 ignition[1037]: INFO : umount: umount passed Sep 11 23:55:01.466684 ignition[1037]: INFO : Ignition finished successfully Sep 11 23:55:01.467101 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 23:55:01.467198 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 23:55:01.468243 systemd[1]: Stopped target network.target - Network. Sep 11 23:55:01.470834 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 23:55:01.470889 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 23:55:01.472223 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 23:55:01.472262 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 23:55:01.473521 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 23:55:01.473566 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 23:55:01.474846 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 23:55:01.474884 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 23:55:01.476201 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 23:55:01.476243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 11 23:55:01.477620 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 23:55:01.479022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 23:55:01.487803 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 23:55:01.487913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 11 23:55:01.491394 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 23:55:01.491609 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 23:55:01.491711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 23:55:01.494114 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 23:55:01.494684 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 23:55:01.495749 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 23:55:01.495789 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:55:01.499468 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 23:55:01.500796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 23:55:01.500855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 23:55:01.502586 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 23:55:01.502628 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:55:01.505078 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 23:55:01.505122 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:55:01.506814 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 23:55:01.506857 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:55:01.509351 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:55:01.513254 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 23:55:01.513310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 23:55:01.520874 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 23:55:01.521640 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 23:55:01.523273 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 23:55:01.523776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:55:01.525163 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 23:55:01.525199 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:55:01.526556 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 23:55:01.526584 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:55:01.527950 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 23:55:01.527988 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:55:01.530062 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 23:55:01.530104 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:55:01.532249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 23:55:01.532297 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:55:01.535103 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 23:55:01.536045 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 23:55:01.536118 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:55:01.538645 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 23:55:01.538696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:55:01.541184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:55:01.541226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:55:01.544961 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 23:55:01.545015 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 23:55:01.545048 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 23:55:01.549895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 23:55:01.550025 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 23:55:01.551854 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 23:55:01.554008 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 23:55:01.571573 systemd[1]: Switching root.
Sep 11 23:55:01.593793 systemd-journald[244]: Journal stopped
Sep 11 23:55:02.322315 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 11 23:55:02.322368 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 23:55:02.322383 kernel: SELinux: policy capability open_perms=1
Sep 11 23:55:02.322392 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 23:55:02.322403 kernel: SELinux: policy capability always_check_network=0
Sep 11 23:55:02.322411 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 23:55:02.322420 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 23:55:02.322429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 23:55:02.322438 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 23:55:02.322447 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 23:55:02.322456 kernel: audit: type=1403 audit(1757634901.758:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 23:55:02.322466 systemd[1]: Successfully loaded SELinux policy in 58.883ms.
Sep 11 23:55:02.322484 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.178ms.
Sep 11 23:55:02.322496 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:55:02.322507 systemd[1]: Detected virtualization kvm.
Sep 11 23:55:02.322516 systemd[1]: Detected architecture arm64.
Sep 11 23:55:02.322529 systemd[1]: Detected first boot.
Sep 11 23:55:02.322538 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:55:02.322548 zram_generator::config[1083]: No configuration found.
Sep 11 23:55:02.322559 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 23:55:02.322568 systemd[1]: Populated /etc with preset unit settings.
Sep 11 23:55:02.322584 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 23:55:02.322594 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 23:55:02.322603 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 23:55:02.322613 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 23:55:02.322623 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 23:55:02.322632 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 23:55:02.322642 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 23:55:02.322652 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 23:55:02.322672 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 23:55:02.322685 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 23:55:02.322695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 23:55:02.322705 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 23:55:02.322714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:55:02.322725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:55:02.322734 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 23:55:02.322790 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 23:55:02.322805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 23:55:02.322817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:55:02.322827 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 11 23:55:02.322837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:55:02.322847 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:55:02.322857 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 23:55:02.322867 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 23:55:02.322876 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 23:55:02.322886 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 23:55:02.322898 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:55:02.322908 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 23:55:02.322918 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:55:02.322928 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:55:02.322938 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 23:55:02.322948 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 23:55:02.322958 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 23:55:02.322967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:55:02.322977 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:55:02.322989 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:55:02.323000 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 23:55:02.323009 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 23:55:02.323019 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 23:55:02.323029 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 23:55:02.323039 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 23:55:02.323049 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 23:55:02.323059 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 23:55:02.323069 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 23:55:02.323080 systemd[1]: Reached target machines.target - Containers.
Sep 11 23:55:02.323090 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 23:55:02.323100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:55:02.323109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:55:02.323119 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 23:55:02.323129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:55:02.323139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 23:55:02.323148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:55:02.323159 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 23:55:02.323169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:55:02.323179 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 23:55:02.323189 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 23:55:02.323198 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 23:55:02.323208 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 23:55:02.323218 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 23:55:02.323228 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:55:02.323237 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:55:02.323248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:55:02.323258 kernel: fuse: init (API version 7.41)
Sep 11 23:55:02.323267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:55:02.323277 kernel: ACPI: bus type drm_connector registered
Sep 11 23:55:02.323285 kernel: loop: module loaded
Sep 11 23:55:02.323295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 23:55:02.323304 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 23:55:02.323314 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:55:02.323326 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 23:55:02.323337 systemd[1]: Stopped verity-setup.service.
Sep 11 23:55:02.323347 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 23:55:02.323356 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 23:55:02.323366 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 23:55:02.323403 systemd-journald[1151]: Collecting audit messages is disabled.
Sep 11 23:55:02.323429 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 23:55:02.323440 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 23:55:02.323450 systemd-journald[1151]: Journal started
Sep 11 23:55:02.323474 systemd-journald[1151]: Runtime Journal (/run/log/journal/9b97cfe2a5d44bdea2fefaff4dbf63f1) is 6M, max 48.5M, 42.4M free.
Sep 11 23:55:02.123298 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 23:55:02.142730 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 23:55:02.143144 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 23:55:02.327770 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:55:02.327412 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 23:55:02.329799 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 23:55:02.331025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:55:02.332344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 23:55:02.332505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 23:55:02.333853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:55:02.334012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:55:02.335201 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 23:55:02.335350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 23:55:02.336566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:55:02.336758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:55:02.338125 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 23:55:02.338297 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 23:55:02.339508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:55:02.339674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:55:02.341008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:55:02.342428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:55:02.343950 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 23:55:02.345265 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 23:55:02.356787 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:55:02.358805 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 23:55:02.360696 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 23:55:02.361749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 23:55:02.361777 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 23:55:02.363342 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 23:55:02.370563 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 23:55:02.371638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:55:02.373062 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 23:55:02.374706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 23:55:02.375908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 23:55:02.377859 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 23:55:02.378810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 23:55:02.381034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:55:02.384944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 23:55:02.386732 systemd-journald[1151]: Time spent on flushing to /var/log/journal/9b97cfe2a5d44bdea2fefaff4dbf63f1 is 31.729ms for 887 entries.
Sep 11 23:55:02.386732 systemd-journald[1151]: System Journal (/var/log/journal/9b97cfe2a5d44bdea2fefaff4dbf63f1) is 8M, max 195.6M, 187.6M free.
Sep 11 23:55:02.441841 systemd-journald[1151]: Received client request to flush runtime journal.
Sep 11 23:55:02.441905 kernel: loop0: detected capacity change from 0 to 203944
Sep 11 23:55:02.441924 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 23:55:02.441939 kernel: loop1: detected capacity change from 0 to 119320
Sep 11 23:55:02.388168 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 23:55:02.392424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:55:02.394123 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 23:55:02.396155 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 23:55:02.402776 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 23:55:02.404864 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 23:55:02.409901 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 23:55:02.420788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:55:02.429219 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 23:55:02.433886 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:55:02.446916 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 23:55:02.468765 kernel: loop2: detected capacity change from 0 to 100600
Sep 11 23:55:02.474649 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Sep 11 23:55:02.474677 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Sep 11 23:55:02.478421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:55:02.485770 kernel: loop3: detected capacity change from 0 to 203944
Sep 11 23:55:02.489138 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 23:55:02.494757 kernel: loop4: detected capacity change from 0 to 119320
Sep 11 23:55:02.500760 kernel: loop5: detected capacity change from 0 to 100600
Sep 11 23:55:02.505025 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 23:55:02.505388 (sd-merge)[1220]: Merged extensions into '/usr'.
Sep 11 23:55:02.512693 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 23:55:02.512720 systemd[1]: Reloading...
Sep 11 23:55:02.566818 zram_generator::config[1248]: No configuration found.
Sep 11 23:55:02.682669 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 23:55:02.708030 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 23:55:02.708432 systemd[1]: Reloading finished in 195 ms.
Sep 11 23:55:02.736275 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 23:55:02.737445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 23:55:02.748842 systemd[1]: Starting ensure-sysext.service...
Sep 11 23:55:02.750320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:55:02.759293 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)...
Sep 11 23:55:02.759308 systemd[1]: Reloading...
Sep 11 23:55:02.762790 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 23:55:02.762819 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 23:55:02.763060 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 23:55:02.763244 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 23:55:02.763900 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 23:55:02.764119 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 11 23:55:02.764172 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 11 23:55:02.767775 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 23:55:02.767785 systemd-tmpfiles[1283]: Skipping /boot
Sep 11 23:55:02.773544 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 23:55:02.773559 systemd-tmpfiles[1283]: Skipping /boot
Sep 11 23:55:02.806801 zram_generator::config[1313]: No configuration found.
Sep 11 23:55:02.929997 systemd[1]: Reloading finished in 170 ms.
Sep 11 23:55:02.953835 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 23:55:02.959789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:55:02.966882 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 23:55:02.969102 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 23:55:02.971377 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 23:55:02.974887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:55:02.977716 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:55:02.979914 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 23:55:02.985779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:55:02.992312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:55:02.995042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:55:02.997389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:55:02.998691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:55:02.998818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:55:03.000977 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 23:55:03.004766 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 23:55:03.006427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:55:03.006573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:55:03.009456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:55:03.010872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:55:03.012460 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:55:03.012602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:55:03.021781 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 23:55:03.029790 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 23:55:03.031160 augenrules[1380]: No rules
Sep 11 23:55:03.033105 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 23:55:03.033322 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 23:55:03.033729 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Sep 11 23:55:03.035840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:55:03.037030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:55:03.039019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 23:55:03.040934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:55:03.049105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:55:03.050183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:55:03.050303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:55:03.051431 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 23:55:03.053894 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 23:55:03.055068 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 23:55:03.056797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:55:03.056947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:55:03.058305 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 23:55:03.058457 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 23:55:03.059641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:55:03.061253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:55:03.061398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:55:03.063035 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 23:55:03.067400 systemd[1]: Finished ensure-sysext.service.
Sep 11 23:55:03.101130 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 23:55:03.102233 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 23:55:03.104235 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 23:55:03.106694 systemd-resolved[1349]: Positive Trust Anchors:
Sep 11 23:55:03.107119 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:55:03.107349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:55:03.107776 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:55:03.107883 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:55:03.112629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 23:55:03.116488 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Sep 11 23:55:03.119925 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:55:03.120860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:55:03.150105 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 11 23:55:03.177338 systemd-networkd[1429]: lo: Link UP
Sep 11 23:55:03.177345 systemd-networkd[1429]: lo: Gained carrier
Sep 11 23:55:03.178041 systemd-networkd[1429]: Enumeration completed
Sep 11 23:55:03.178140 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 23:55:03.179214 systemd[1]: Reached target network.target - Network.
Sep 11 23:55:03.181912 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 23:55:03.185069 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:55:03.185081 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 23:55:03.185673 systemd-networkd[1429]: eth0: Link UP
Sep 11 23:55:03.185807 systemd-networkd[1429]: eth0: Gained carrier
Sep 11 23:55:03.185825 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:55:03.186869 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 23:55:03.199579 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 23:55:03.201044 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 23:55:03.202009 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 23:55:03.202853 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 23:55:03.203054 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 23:55:03.203362 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Sep 11 23:55:03.204481 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 23:55:03.204620 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 23:55:03.204718 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-09-11 23:55:02.902319 UTC.
Sep 11 23:55:03.205558 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 23:55:03.205588 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:55:03.206618 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 23:55:03.207871 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 23:55:03.209007 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 23:55:03.210352 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:55:03.212308 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 23:55:03.214896 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 23:55:03.217524 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 23:55:03.218761 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 23:55:03.219676 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 23:55:03.223358 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 23:55:03.224820 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 23:55:03.226793 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 23:55:03.227984 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 23:55:03.237288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 23:55:03.238500 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:55:03.239303 systemd[1]: Reached target basic.target - Basic System.
Sep 11 23:55:03.240164 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 23:55:03.240190 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 23:55:03.243879 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 23:55:03.248912 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 23:55:03.257621 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 23:55:03.259829 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 23:55:03.261972 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 23:55:03.263405 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 23:55:03.264380 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 23:55:03.266847 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 23:55:03.268532 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 23:55:03.270410 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 23:55:03.288264 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 23:55:03.294909 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 23:55:03.296419 jq[1462]: false
Sep 11 23:55:03.296504 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 23:55:03.297143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 23:55:03.297936 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 23:55:03.301616 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 23:55:03.305777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 23:55:03.308533 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 23:55:03.308751 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 23:55:03.309898 extend-filesystems[1463]: Found /dev/vda6
Sep 11 23:55:03.312912 extend-filesystems[1463]: Found /dev/vda9
Sep 11 23:55:03.318208 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 23:55:03.318433 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 23:55:03.322320 jq[1478]: true
Sep 11 23:55:03.322770 extend-filesystems[1463]: Checking size of /dev/vda9
Sep 11 23:55:03.328425 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 23:55:03.328680 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 23:55:03.330909 update_engine[1477]: I20250911 23:55:03.330672 1477 main.cc:92] Flatcar Update Engine starting
Sep 11 23:55:03.333177 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 23:55:03.343615 extend-filesystems[1463]: Resized partition /dev/vda9
Sep 11 23:55:03.345068 tar[1481]: linux-arm64/helm
Sep 11 23:55:03.345533 jq[1491]: true
Sep 11 23:55:03.348395 extend-filesystems[1505]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 23:55:03.354419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:55:03.359578 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 23:55:03.364760 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 23:55:03.374991 dbus-daemon[1457]: [system] SELinux support is enabled
Sep 11 23:55:03.375290 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 23:55:03.382048 update_engine[1477]: I20250911 23:55:03.382004 1477 update_check_scheduler.cc:74] Next update check in 11m13s
Sep 11 23:55:03.385344 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 11 23:55:03.385371 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 23:55:03.388067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 23:55:03.388090 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 23:55:03.392523 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 23:55:03.397989 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 23:55:03.417803 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 23:55:03.440865 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 23:55:03.440865 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 23:55:03.440865 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 23:55:03.445697 extend-filesystems[1463]: Resized filesystem in /dev/vda9
Sep 11 23:55:03.446984 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 23:55:03.449789 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 23:55:03.455759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:55:03.462296 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Sep 11 23:55:03.468248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 11 23:55:03.472413 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 11 23:55:03.478222 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 11 23:55:03.478691 systemd-logind[1471]: New seat seat0.
Sep 11 23:55:03.486559 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 23:55:03.488460 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 23:55:03.534441 containerd[1489]: time="2025-09-11T23:55:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 23:55:03.535084 containerd[1489]: time="2025-09-11T23:55:03.535046880Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 11 23:55:03.545595 containerd[1489]: time="2025-09-11T23:55:03.545511920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.32µs"
Sep 11 23:55:03.545595 containerd[1489]: time="2025-09-11T23:55:03.545584800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 11 23:55:03.545595 containerd[1489]: time="2025-09-11T23:55:03.545603400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 11 23:55:03.545847 containerd[1489]: time="2025-09-11T23:55:03.545821840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 11 23:55:03.545875 containerd[1489]: time="2025-09-11T23:55:03.545851080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 11 23:55:03.545914 containerd[1489]: time="2025-09-11T23:55:03.545878400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 23:55:03.545953 containerd[1489]: time="2025-09-11T23:55:03.545933640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 23:55:03.545953 containerd[1489]: time="2025-09-11T23:55:03.545949400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546294 containerd[1489]: time="2025-09-11T23:55:03.546266480Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546294 containerd[1489]: time="2025-09-11T23:55:03.546292280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546339 containerd[1489]: time="2025-09-11T23:55:03.546306080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546339 containerd[1489]: time="2025-09-11T23:55:03.546314280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546418 containerd[1489]: time="2025-09-11T23:55:03.546399080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546784 containerd[1489]: time="2025-09-11T23:55:03.546757120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546826 containerd[1489]: time="2025-09-11T23:55:03.546811840Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 23:55:03.546849 containerd[1489]: time="2025-09-11T23:55:03.546826040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 11 23:55:03.546869 containerd[1489]: time="2025-09-11T23:55:03.546861960Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 11 23:55:03.547169 containerd[1489]: time="2025-09-11T23:55:03.547147400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 11 23:55:03.547293 containerd[1489]: time="2025-09-11T23:55:03.547271240Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 23:55:03.552578 containerd[1489]: time="2025-09-11T23:55:03.552539840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 11 23:55:03.552669 containerd[1489]: time="2025-09-11T23:55:03.552613200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 11 23:55:03.552669 containerd[1489]: time="2025-09-11T23:55:03.552630840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 11 23:55:03.552669 containerd[1489]: time="2025-09-11T23:55:03.552642280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 11 23:55:03.552735 containerd[1489]: time="2025-09-11T23:55:03.552717440Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 11 23:55:03.552735 containerd[1489]: time="2025-09-11T23:55:03.552746040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 11 23:55:03.552797 containerd[1489]: time="2025-09-11T23:55:03.552761280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 11 23:55:03.552797 containerd[1489]: time="2025-09-11T23:55:03.552773680Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 23:55:03.552797 containerd[1489]: time="2025-09-11T23:55:03.552784480Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 11 23:55:03.552797 containerd[1489]: time="2025-09-11T23:55:03.552796600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 11 23:55:03.552858 containerd[1489]: time="2025-09-11T23:55:03.552809760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 11 23:55:03.552858 containerd[1489]: time="2025-09-11T23:55:03.552822960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 11 23:55:03.552974 containerd[1489]: time="2025-09-11T23:55:03.552951520Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 11 23:55:03.553000 containerd[1489]: time="2025-09-11T23:55:03.552979120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 11 23:55:03.553000 containerd[1489]: time="2025-09-11T23:55:03.552995600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 11 23:55:03.553038 containerd[1489]: time="2025-09-11T23:55:03.553006040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 11 23:55:03.553038 containerd[1489]: time="2025-09-11T23:55:03.553016160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 11 23:55:03.553038 containerd[1489]: time="2025-09-11T23:55:03.553026520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 11 23:55:03.553038 containerd[1489]: time="2025-09-11T23:55:03.553036800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 11 23:55:03.553108 containerd[1489]: time="2025-09-11T23:55:03.553046640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 11 23:55:03.553108 containerd[1489]: time="2025-09-11T23:55:03.553057640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 11 23:55:03.553108 containerd[1489]: time="2025-09-11T23:55:03.553068040Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 11 23:55:03.553108 containerd[1489]: time="2025-09-11T23:55:03.553077840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 11 23:55:03.553294 containerd[1489]: time="2025-09-11T23:55:03.553275600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 11 23:55:03.553321 containerd[1489]: time="2025-09-11T23:55:03.553296320Z" level=info msg="Start snapshots syncer"
Sep 11 23:55:03.553408 containerd[1489]: time="2025-09-11T23:55:03.553387560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 11 23:55:03.554788 containerd[1489]: time="2025-09-11T23:55:03.554704680Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 11 23:55:03.554911 containerd[1489]: time="2025-09-11T23:55:03.554812920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 11 23:55:03.554934 containerd[1489]: time="2025-09-11T23:55:03.554911560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 11 23:55:03.555982 containerd[1489]: time="2025-09-11T23:55:03.555947240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 11 23:55:03.556022 containerd[1489]: time="2025-09-11T23:55:03.555990920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 11 23:55:03.556022 containerd[1489]: time="2025-09-11T23:55:03.556004120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 11 23:55:03.556022 containerd[1489]: time="2025-09-11T23:55:03.556016760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556028400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556039680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556050120Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556076400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556086720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556098280Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556138280Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556152600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 23:55:03.556159 containerd[1489]: time="2025-09-11T23:55:03.556161400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556171040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556179280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556188280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556199120Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556274760Z" level=info msg="runtime interface created"
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556279480Z" level=info msg="created NRI interface"
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556287480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 11 23:55:03.556301 containerd[1489]: time="2025-09-11T23:55:03.556299600Z" level=info msg="Connect containerd service"
Sep 11 23:55:03.556421 containerd[1489]: time="2025-09-11T23:55:03.556325040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 11 23:55:03.558749 containerd[1489]: time="2025-09-11T23:55:03.557243040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 11 23:55:03.624297 containerd[1489]: time="2025-09-11T23:55:03.624231200Z" level=info msg="Start subscribing containerd event"
Sep 11 23:55:03.625883 containerd[1489]: time="2025-09-11T23:55:03.625848520Z" level=info msg="Start recovering state"
Sep 11 23:55:03.625930 containerd[1489]: time="2025-09-11T23:55:03.624261400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 11 23:55:03.625999 containerd[1489]: time="2025-09-11T23:55:03.625969080Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 11 23:55:03.626023 containerd[1489]: time="2025-09-11T23:55:03.625995680Z" level=info msg="Start event monitor"
Sep 11 23:55:03.626042 containerd[1489]: time="2025-09-11T23:55:03.626021400Z" level=info msg="Start cni network conf syncer for default"
Sep 11 23:55:03.626042 containerd[1489]: time="2025-09-11T23:55:03.626031040Z" level=info msg="Start streaming server"
Sep 11 23:55:03.626042 containerd[1489]: time="2025-09-11T23:55:03.626039600Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 11 23:55:03.626101 containerd[1489]: time="2025-09-11T23:55:03.626046200Z" level=info msg="runtime interface starting up..."
Sep 11 23:55:03.626101 containerd[1489]: time="2025-09-11T23:55:03.626052160Z" level=info msg="starting plugins..."
Sep 11 23:55:03.626101 containerd[1489]: time="2025-09-11T23:55:03.626065480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 11 23:55:03.626311 systemd[1]: Started containerd.service - containerd container runtime.
Sep 11 23:55:03.627723 containerd[1489]: time="2025-09-11T23:55:03.627692720Z" level=info msg="containerd successfully booted in 0.093582s"
Sep 11 23:55:03.686613 tar[1481]: linux-arm64/LICENSE
Sep 11 23:55:03.686773 tar[1481]: linux-arm64/README.md
Sep 11 23:55:03.708976 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 11 23:55:04.112528 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 23:55:04.131325 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 23:55:04.135131 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 23:55:04.170051 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 23:55:04.170285 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 23:55:04.172507 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 23:55:04.192594 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 11 23:55:04.195398 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 11 23:55:04.197229 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 11 23:55:04.198305 systemd[1]: Reached target getty.target - Login Prompts.
Sep 11 23:55:04.626874 systemd-networkd[1429]: eth0: Gained IPv6LL
Sep 11 23:55:04.632329 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 11 23:55:04.633786 systemd[1]: Reached target network-online.target - Network is Online.
Sep 11 23:55:04.635872 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 11 23:55:04.637935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:55:04.639619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 11 23:55:04.656509 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 11 23:55:04.658012 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 11 23:55:04.658169 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 11 23:55:04.659719 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 11 23:55:05.163797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:55:05.164991 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 11 23:55:05.166970 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 23:55:05.169805 systemd[1]: Startup finished in 2.001s (kernel) + 5.150s (initrd) + 3.470s (userspace) = 10.623s.
Sep 11 23:55:05.517158 kubelet[1602]: E0911 23:55:05.517063 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 23:55:05.519568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 23:55:05.519720 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 23:55:05.520798 systemd[1]: kubelet.service: Consumed 763ms CPU time, 254.2M memory peak.
Sep 11 23:55:09.004767 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 11 23:55:09.005768 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:40938.service - OpenSSH per-connection server daemon (10.0.0.1:40938).
Sep 11 23:55:09.077073 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 40938 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:55:09.078679 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:55:09.084326 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 11 23:55:09.085343 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 11 23:55:09.090357 systemd-logind[1471]: New session 1 of user core.
Sep 11 23:55:09.104040 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 11 23:55:09.106245 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 11 23:55:09.117473 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 11 23:55:09.119432 systemd-logind[1471]: New session c1 of user core.
Sep 11 23:55:09.213525 systemd[1620]: Queued start job for default target default.target.
Sep 11 23:55:09.232625 systemd[1620]: Created slice app.slice - User Application Slice.
Sep 11 23:55:09.232655 systemd[1620]: Reached target paths.target - Paths.
Sep 11 23:55:09.232690 systemd[1620]: Reached target timers.target - Timers.
Sep 11 23:55:09.233828 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 11 23:55:09.242843 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 11 23:55:09.242904 systemd[1620]: Reached target sockets.target - Sockets.
Sep 11 23:55:09.242940 systemd[1620]: Reached target basic.target - Basic System.
Sep 11 23:55:09.242965 systemd[1620]: Reached target default.target - Main User Target.
Sep 11 23:55:09.242989 systemd[1620]: Startup finished in 118ms.
Sep 11 23:55:09.243133 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 11 23:55:09.244302 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 11 23:55:09.305682 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:40948.service - OpenSSH per-connection server daemon (10.0.0.1:40948).
Sep 11 23:55:09.358873 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 40948 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:55:09.359866 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:55:09.364319 systemd-logind[1471]: New session 2 of user core.
Sep 11 23:55:09.383901 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 11 23:55:09.433414 sshd[1634]: Connection closed by 10.0.0.1 port 40948
Sep 11 23:55:09.433840 sshd-session[1631]: pam_unix(sshd:session): session closed for user core
Sep 11 23:55:09.443381 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:40948.service: Deactivated successfully.
Sep 11 23:55:09.445960 systemd[1]: session-2.scope: Deactivated successfully.
Sep 11 23:55:09.446633 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.
Sep 11 23:55:09.448674 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:40950.service - OpenSSH per-connection server daemon (10.0.0.1:40950).
Sep 11 23:55:09.449378 systemd-logind[1471]: Removed session 2.
Sep 11 23:55:09.496182 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 40950 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:55:09.497221 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:55:09.501277 systemd-logind[1471]: New session 3 of user core.
Sep 11 23:55:09.512915 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 11 23:55:09.558712 sshd[1643]: Connection closed by 10.0.0.1 port 40950
Sep 11 23:55:09.558921 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
Sep 11 23:55:09.568472 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:40950.service: Deactivated successfully.
Sep 11 23:55:09.570959 systemd[1]: session-3.scope: Deactivated successfully.
Sep 11 23:55:09.571605 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit.
Sep 11 23:55:09.573516 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:40952.service - OpenSSH per-connection server daemon (10.0.0.1:40952). Sep 11 23:55:09.574250 systemd-logind[1471]: Removed session 3. Sep 11 23:55:09.626325 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 40952 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:55:09.627365 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:55:09.630675 systemd-logind[1471]: New session 4 of user core. Sep 11 23:55:09.637854 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 23:55:09.686321 sshd[1652]: Connection closed by 10.0.0.1 port 40952 Sep 11 23:55:09.686867 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Sep 11 23:55:09.695290 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:40952.service: Deactivated successfully. Sep 11 23:55:09.697808 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 23:55:09.698375 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Sep 11 23:55:09.700358 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:40962.service - OpenSSH per-connection server daemon (10.0.0.1:40962). Sep 11 23:55:09.701212 systemd-logind[1471]: Removed session 4. Sep 11 23:55:09.741156 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 40962 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:55:09.742215 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:55:09.746459 systemd-logind[1471]: New session 5 of user core. Sep 11 23:55:09.754920 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 23:55:09.808583 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 23:55:09.808864 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:55:09.823579 sudo[1662]: pam_unix(sudo:session): session closed for user root Sep 11 23:55:09.824775 sshd[1661]: Connection closed by 10.0.0.1 port 40962 Sep 11 23:55:09.825219 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 11 23:55:09.831403 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:40962.service: Deactivated successfully. Sep 11 23:55:09.833965 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 23:55:09.834492 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. Sep 11 23:55:09.836254 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:40972.service - OpenSSH per-connection server daemon (10.0.0.1:40972). Sep 11 23:55:09.836900 systemd-logind[1471]: Removed session 5. Sep 11 23:55:09.884553 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:55:09.885825 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:55:09.891887 systemd-logind[1471]: New session 6 of user core. Sep 11 23:55:09.904941 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 11 23:55:09.956073 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 23:55:09.956321 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:55:10.035049 sudo[1673]: pam_unix(sudo:session): session closed for user root Sep 11 23:55:10.039756 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 23:55:10.040002 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:55:10.047718 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 23:55:10.079427 augenrules[1695]: No rules Sep 11 23:55:10.080641 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 23:55:10.081876 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 23:55:10.083093 sudo[1672]: pam_unix(sudo:session): session closed for user root Sep 11 23:55:10.084208 sshd[1671]: Connection closed by 10.0.0.1 port 40972 Sep 11 23:55:10.085893 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Sep 11 23:55:10.095520 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:40972.service: Deactivated successfully. Sep 11 23:55:10.097139 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 23:55:10.097794 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Sep 11 23:55:10.100458 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:45444.service - OpenSSH per-connection server daemon (10.0.0.1:45444). Sep 11 23:55:10.101344 systemd-logind[1471]: Removed session 6. Sep 11 23:55:10.154844 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 45444 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:55:10.156059 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:55:10.160288 systemd-logind[1471]: New session 7 of user core. 
Sep 11 23:55:10.165918 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 23:55:10.216093 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 23:55:10.216645 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 23:55:10.481537 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 23:55:10.509055 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 23:55:10.728677 dockerd[1729]: time="2025-09-11T23:55:10.728361201Z" level=info msg="Starting up" Sep 11 23:55:10.730134 dockerd[1729]: time="2025-09-11T23:55:10.730090887Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 23:55:10.743609 dockerd[1729]: time="2025-09-11T23:55:10.743518879Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 11 23:55:10.773455 dockerd[1729]: time="2025-09-11T23:55:10.773419036Z" level=info msg="Loading containers: start." Sep 11 23:55:10.783756 kernel: Initializing XFRM netlink socket Sep 11 23:55:11.010314 systemd-networkd[1429]: docker0: Link UP Sep 11 23:55:11.014394 dockerd[1729]: time="2025-09-11T23:55:11.014179706Z" level=info msg="Loading containers: done." 
Sep 11 23:55:11.028516 dockerd[1729]: time="2025-09-11T23:55:11.028471122Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 23:55:11.028637 dockerd[1729]: time="2025-09-11T23:55:11.028547413Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 11 23:55:11.028637 dockerd[1729]: time="2025-09-11T23:55:11.028618305Z" level=info msg="Initializing buildkit" Sep 11 23:55:11.051650 dockerd[1729]: time="2025-09-11T23:55:11.051604028Z" level=info msg="Completed buildkit initialization" Sep 11 23:55:11.058122 dockerd[1729]: time="2025-09-11T23:55:11.058074504Z" level=info msg="Daemon has completed initialization" Sep 11 23:55:11.058212 dockerd[1729]: time="2025-09-11T23:55:11.058162893Z" level=info msg="API listen on /run/docker.sock" Sep 11 23:55:11.058323 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 23:55:11.605218 containerd[1489]: time="2025-09-11T23:55:11.605185235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 11 23:55:12.261315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3315196116.mount: Deactivated successfully. 
Sep 11 23:55:13.090383 containerd[1489]: time="2025-09-11T23:55:13.090317166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:13.091563 containerd[1489]: time="2025-09-11T23:55:13.091531732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 11 23:55:13.092520 containerd[1489]: time="2025-09-11T23:55:13.092493284Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:13.094916 containerd[1489]: time="2025-09-11T23:55:13.094866576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:13.095897 containerd[1489]: time="2025-09-11T23:55:13.095834495Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.490611593s" Sep 11 23:55:13.095897 containerd[1489]: time="2025-09-11T23:55:13.095866329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 11 23:55:13.097002 containerd[1489]: time="2025-09-11T23:55:13.096964472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 11 23:55:14.114874 containerd[1489]: time="2025-09-11T23:55:14.114810633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:14.115599 containerd[1489]: time="2025-09-11T23:55:14.115551555Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 11 23:55:14.116281 containerd[1489]: time="2025-09-11T23:55:14.116248082Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:14.119280 containerd[1489]: time="2025-09-11T23:55:14.119244482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:14.120460 containerd[1489]: time="2025-09-11T23:55:14.120250186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.023183438s" Sep 11 23:55:14.120460 containerd[1489]: time="2025-09-11T23:55:14.120278620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 11 23:55:14.120802 containerd[1489]: time="2025-09-11T23:55:14.120711871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 11 23:55:15.143177 containerd[1489]: time="2025-09-11T23:55:15.143089637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:15.143691 containerd[1489]: 
time="2025-09-11T23:55:15.143659001Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 11 23:55:15.144560 containerd[1489]: time="2025-09-11T23:55:15.144526590Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:15.147082 containerd[1489]: time="2025-09-11T23:55:15.147050290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:15.148067 containerd[1489]: time="2025-09-11T23:55:15.148042465Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.027300566s" Sep 11 23:55:15.148112 containerd[1489]: time="2025-09-11T23:55:15.148072046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 11 23:55:15.148783 containerd[1489]: time="2025-09-11T23:55:15.148703149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 11 23:55:15.770108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 23:55:15.772554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:55:15.924926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 23:55:15.929051 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 23:55:15.973619 kubelet[2025]: E0911 23:55:15.973569 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 23:55:15.978560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 23:55:15.978681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 23:55:15.979861 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.2M memory peak. Sep 11 23:55:16.099968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355495594.mount: Deactivated successfully. Sep 11 23:55:16.598950 containerd[1489]: time="2025-09-11T23:55:16.598828187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:16.599894 containerd[1489]: time="2025-09-11T23:55:16.599864878Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 11 23:55:16.600961 containerd[1489]: time="2025-09-11T23:55:16.600910381Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:16.603331 containerd[1489]: time="2025-09-11T23:55:16.603168585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:16.603791 containerd[1489]: time="2025-09-11T23:55:16.603767552Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.455029459s" Sep 11 23:55:16.603866 containerd[1489]: time="2025-09-11T23:55:16.603852937Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 11 23:55:16.604413 containerd[1489]: time="2025-09-11T23:55:16.604314756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 23:55:17.060426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437033408.mount: Deactivated successfully. Sep 11 23:55:17.642788 containerd[1489]: time="2025-09-11T23:55:17.642310559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:17.643112 containerd[1489]: time="2025-09-11T23:55:17.643016627Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 11 23:55:17.643654 containerd[1489]: time="2025-09-11T23:55:17.643603970Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:17.646894 containerd[1489]: time="2025-09-11T23:55:17.646838051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:17.647470 containerd[1489]: time="2025-09-11T23:55:17.647442957Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with 
image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.043099134s" Sep 11 23:55:17.647521 containerd[1489]: time="2025-09-11T23:55:17.647476849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 11 23:55:17.647969 containerd[1489]: time="2025-09-11T23:55:17.647943640Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 23:55:18.048279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777921325.mount: Deactivated successfully. Sep 11 23:55:18.055164 containerd[1489]: time="2025-09-11T23:55:18.055110866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 11 23:55:18.055276 containerd[1489]: time="2025-09-11T23:55:18.055199705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 23:55:18.056759 containerd[1489]: time="2025-09-11T23:55:18.056185926Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 23:55:18.058770 containerd[1489]: time="2025-09-11T23:55:18.058718445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 23:55:18.059424 containerd[1489]: time="2025-09-11T23:55:18.059394324Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 411.418135ms" Sep 11 23:55:18.059500 containerd[1489]: time="2025-09-11T23:55:18.059486583Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 11 23:55:18.060044 containerd[1489]: time="2025-09-11T23:55:18.059981721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 11 23:55:18.527480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434576481.mount: Deactivated successfully. Sep 11 23:55:20.449632 containerd[1489]: time="2025-09-11T23:55:20.449574731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:20.450281 containerd[1489]: time="2025-09-11T23:55:20.450245645Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 11 23:55:20.451235 containerd[1489]: time="2025-09-11T23:55:20.451198974Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:20.454026 containerd[1489]: time="2025-09-11T23:55:20.453990665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:20.455177 containerd[1489]: time="2025-09-11T23:55:20.455149831Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.394940634s" Sep 11 23:55:20.455177 containerd[1489]: time="2025-09-11T23:55:20.455175277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 11 23:55:25.888241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:55:25.888374 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.2M memory peak. Sep 11 23:55:25.890223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:55:25.909191 systemd[1]: Reload requested from client PID 2178 ('systemctl') (unit session-7.scope)... Sep 11 23:55:25.909206 systemd[1]: Reloading... Sep 11 23:55:25.981792 zram_generator::config[2221]: No configuration found. Sep 11 23:55:26.151615 systemd[1]: Reloading finished in 242 ms. Sep 11 23:55:26.201461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:55:26.203525 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:55:26.205128 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 23:55:26.206782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:55:26.206816 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Sep 11 23:55:26.208095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:55:26.344498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 23:55:26.347830 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 23:55:26.381772 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:55:26.381772 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 23:55:26.381772 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:55:26.381772 kubelet[2268]: I0911 23:55:26.381386 2268 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 23:55:28.091175 kubelet[2268]: I0911 23:55:28.091124 2268 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 23:55:28.091175 kubelet[2268]: I0911 23:55:28.091160 2268 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 23:55:28.091523 kubelet[2268]: I0911 23:55:28.091402 2268 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 23:55:28.110369 kubelet[2268]: E0911 23:55:28.110311 2268 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:55:28.111692 kubelet[2268]: I0911 
23:55:28.111653 2268 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:55:28.119716 kubelet[2268]: I0911 23:55:28.119692 2268 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 23:55:28.123328 kubelet[2268]: I0911 23:55:28.123307 2268 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 23:55:28.124111 kubelet[2268]: I0911 23:55:28.124073 2268 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 23:55:28.124269 kubelet[2268]: I0911 23:55:28.124238 2268 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 23:55:28.124458 kubelet[2268]: I0911 23:55:28.124272 2268 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 23:55:28.124544 kubelet[2268]: I0911 23:55:28.124511 2268 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 23:55:28.124544 kubelet[2268]: I0911 23:55:28.124520 2268 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 23:55:28.124814 kubelet[2268]: I0911 23:55:28.124780 2268 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:55:28.126723 kubelet[2268]: I0911 23:55:28.126701 2268 kubelet.go:408] "Attempting to sync node with API server" Sep 11 23:55:28.126786 kubelet[2268]: I0911 23:55:28.126733 2268 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 23:55:28.126786 kubelet[2268]: I0911 23:55:28.126775 2268 kubelet.go:314] "Adding apiserver pod source" Sep 11 23:55:28.126867 kubelet[2268]: I0911 23:55:28.126853 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 23:55:28.132762 kubelet[2268]: W0911 23:55:28.130789 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 11 23:55:28.132762 kubelet[2268]: E0911 23:55:28.130934 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:55:28.132762 kubelet[2268]: W0911 23:55:28.131029 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 11 23:55:28.132762 kubelet[2268]: E0911 23:55:28.131063 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:55:28.132762 kubelet[2268]: I0911 23:55:28.131868 2268 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 23:55:28.132762 kubelet[2268]: I0911 23:55:28.132642 2268 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 23:55:28.132969 kubelet[2268]: W0911 23:55:28.132883 2268 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 11 23:55:28.134863 kubelet[2268]: I0911 23:55:28.134835 2268 server.go:1274] "Started kubelet" Sep 11 23:55:28.135648 kubelet[2268]: I0911 23:55:28.135599 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:55:28.135970 kubelet[2268]: I0911 23:55:28.135945 2268 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:55:28.136085 kubelet[2268]: I0911 23:55:28.136061 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 23:55:28.136085 kubelet[2268]: I0911 23:55:28.136073 2268 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:55:28.136572 kubelet[2268]: I0911 23:55:28.136558 2268 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 23:55:28.138955 kubelet[2268]: I0911 23:55:28.137069 2268 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:55:28.139064 kubelet[2268]: I0911 23:55:28.139047 2268 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:55:28.139092 kubelet[2268]: E0911 23:55:28.137498 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:55:28.139092 kubelet[2268]: I0911 23:55:28.138927 2268 server.go:449] "Adding debug handlers to kubelet server" Sep 11 23:55:28.139365 kubelet[2268]: E0911 23:55:28.138210 2268 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18645fa7a04b50fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 23:55:28.134803709 +0000 UTC m=+1.784234384,LastTimestamp:2025-09-11 23:55:28.134803709 +0000 UTC m=+1.784234384,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 23:55:28.139574 kubelet[2268]: E0911 23:55:28.139519 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" Sep 11 23:55:28.139928 kubelet[2268]: W0911 23:55:28.139788 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 11 23:55:28.139928 kubelet[2268]: E0911 23:55:28.139848 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:55:28.139928 kubelet[2268]: I0911 23:55:28.137297 2268 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 23:55:28.140058 kubelet[2268]: I0911 23:55:28.139981 2268 factory.go:221] Registration of the systemd container factory successfully Sep 11 23:55:28.140099 kubelet[2268]: I0911 23:55:28.140076 2268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:55:28.141320 kubelet[2268]: E0911 23:55:28.141292 2268 kubelet.go:1478] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:55:28.141613 kubelet[2268]: I0911 23:55:28.141597 2268 factory.go:221] Registration of the containerd container factory successfully Sep 11 23:55:28.153880 kubelet[2268]: I0911 23:55:28.153540 2268 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 23:55:28.153880 kubelet[2268]: I0911 23:55:28.153558 2268 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 23:55:28.153880 kubelet[2268]: I0911 23:55:28.153600 2268 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:55:28.155520 kubelet[2268]: I0911 23:55:28.155489 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 23:55:28.156682 kubelet[2268]: I0911 23:55:28.156651 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 23:55:28.156682 kubelet[2268]: I0911 23:55:28.156681 2268 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 23:55:28.156817 kubelet[2268]: I0911 23:55:28.156699 2268 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 23:55:28.156817 kubelet[2268]: E0911 23:55:28.156775 2268 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:55:28.225875 kubelet[2268]: I0911 23:55:28.225832 2268 policy_none.go:49] "None policy: Start" Sep 11 23:55:28.226079 kubelet[2268]: W0911 23:55:28.226019 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 11 23:55:28.226131 kubelet[2268]: E0911 23:55:28.226079 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:55:28.226761 kubelet[2268]: I0911 23:55:28.226586 2268 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 23:55:28.226761 kubelet[2268]: I0911 23:55:28.226627 2268 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:55:28.235843 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 23:55:28.240009 kubelet[2268]: E0911 23:55:28.239983 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:55:28.252504 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 23:55:28.255658 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 23:55:28.257636 kubelet[2268]: E0911 23:55:28.257609 2268 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 23:55:28.264047 kubelet[2268]: I0911 23:55:28.263665 2268 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 23:55:28.264047 kubelet[2268]: I0911 23:55:28.263878 2268 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:55:28.264047 kubelet[2268]: I0911 23:55:28.263890 2268 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:55:28.264628 kubelet[2268]: I0911 23:55:28.264097 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:55:28.265247 kubelet[2268]: E0911 23:55:28.265112 2268 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 23:55:28.341002 kubelet[2268]: 
E0911 23:55:28.340940 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" Sep 11 23:55:28.366004 kubelet[2268]: I0911 23:55:28.365908 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:55:28.369108 kubelet[2268]: E0911 23:55:28.368829 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Sep 11 23:55:28.467550 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 23:55:28.484994 systemd[1]: Created slice kubepods-burstable-podc4114d5e412390a82f115c591a3600c0.slice - libcontainer container kubepods-burstable-podc4114d5e412390a82f115c591a3600c0.slice. Sep 11 23:55:28.494934 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
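The `reconciler_common` records below log one `VerifyControllerAttachedVolume` start per hostPath volume of the three static control-plane pods (kubeconfig, k8s-certs, ca-certs, and so on). The volume name, unique plugin path, and owning pod can be pulled out of such a record with a small parser; this is a sketch against a simplified form of the quoted message, not a kubelet API:

```python
import re

# Fields as they appear in the reconciler_common log message (simplified:
# the escaped inner quotes of the journal encoding are unescaped here).
VOLUME_RE = re.compile(
    r'"(?P<volume>[^"]+)" \(UniqueName: "(?P<unique>[^"]+)"\).*?'
    r'pod="(?P<pod>[^"]+)"'
)

line = (
    'operationExecutor.VerifyControllerAttachedVolume started for volume '
    '"kubeconfig" (UniqueName: "kubernetes.io/host-path/'
    'fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig") '
    'pod="kube-system/kube-scheduler-localhost"'
)

m = VOLUME_RE.search(line)
print(m.group("volume"), m.group("pod"))
```

The `UniqueName` encodes the plugin (`kubernetes.io/host-path`), the pod UID, and the volume name in one string, which is why the same `ca-certs` name can appear once per pod without colliding.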
Sep 11 23:55:28.541841 kubelet[2268]: I0911 23:55:28.541800 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:55:28.541841 kubelet[2268]: I0911 23:55:28.541843 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4114d5e412390a82f115c591a3600c0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4114d5e412390a82f115c591a3600c0\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:28.541975 kubelet[2268]: I0911 23:55:28.541865 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4114d5e412390a82f115c591a3600c0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4114d5e412390a82f115c591a3600c0\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:28.541975 kubelet[2268]: I0911 23:55:28.541881 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4114d5e412390a82f115c591a3600c0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c4114d5e412390a82f115c591a3600c0\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:28.541975 kubelet[2268]: I0911 23:55:28.541898 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:28.541975 kubelet[2268]: I0911 
23:55:28.541913 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:28.541975 kubelet[2268]: I0911 23:55:28.541925 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:28.542079 kubelet[2268]: I0911 23:55:28.541938 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:28.542079 kubelet[2268]: I0911 23:55:28.541952 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:28.570881 kubelet[2268]: I0911 23:55:28.570853 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:55:28.571226 kubelet[2268]: E0911 23:55:28.571194 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Sep 11 
23:55:28.741913 kubelet[2268]: E0911 23:55:28.741734 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" Sep 11 23:55:28.782839 containerd[1489]: time="2025-09-11T23:55:28.782729061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 23:55:28.795124 containerd[1489]: time="2025-09-11T23:55:28.795066323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c4114d5e412390a82f115c591a3600c0,Namespace:kube-system,Attempt:0,}" Sep 11 23:55:28.798067 containerd[1489]: time="2025-09-11T23:55:28.798007328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 23:55:28.798476 containerd[1489]: time="2025-09-11T23:55:28.798447892Z" level=info msg="connecting to shim 56c91e2c2dac6d11094a516d90787cc5d3196a092f12cb588a3a60567bf51f9c" address="unix:///run/containerd/s/b102014ef7ab3509c5b52e7443cdbac059ecae1b4e2bb8abf3b47347de98ea96" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:55:28.828506 containerd[1489]: time="2025-09-11T23:55:28.828456307Z" level=info msg="connecting to shim ee76f921c463029fc9d3c92b5e034b1735c3e8c003f775ca93c585904c963e81" address="unix:///run/containerd/s/a6819d8b2a1d5b492142ebb4dfd3596e9c32a70aa2802cc75d1c337271ebd1b6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:55:28.828894 systemd[1]: Started cri-containerd-56c91e2c2dac6d11094a516d90787cc5d3196a092f12cb588a3a60567bf51f9c.scope - libcontainer container 56c91e2c2dac6d11094a516d90787cc5d3196a092f12cb588a3a60567bf51f9c. 
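The lease controller's retry interval doubles across these records: `interval="200ms"` at 23:55:28.139, `400ms` at 28.340, `800ms` at 28.741. A minimal sketch of that doubling schedule (the cap and attempt count here are illustrative assumptions, not values taken from the kubelet source):

```python
def backoff_intervals(initial_ms: int = 200, factor: int = 2,
                      cap_ms: int = 7000, attempts: int = 6) -> list:
    """Doubling retry intervals, capped, like the 200ms/400ms/800ms in the log."""
    intervals = []
    current = initial_ms
    for _ in range(attempts):
        intervals.append(min(current, cap_ms))
        current *= factor
    return intervals

print(backoff_intervals())  # [200, 400, 800, 1600, 3200, 6400]
```

Exponential backoff keeps a not-yet-listening apiserver from being hammered while still converging quickly once it comes up; here the lease is in fact ensured within a few seconds, before the cap would matter.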
Sep 11 23:55:28.829055 containerd[1489]: time="2025-09-11T23:55:28.829031783Z" level=info msg="connecting to shim bc038008864ca4270f1de93d87c6f08ea502398c2c6589d0a5ea7ab467084fa5" address="unix:///run/containerd/s/244b4b93cf344e9fa6d89d37bba57d00c3ce442c3b57d3e7de3862ec68019806" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:55:28.858962 systemd[1]: Started cri-containerd-bc038008864ca4270f1de93d87c6f08ea502398c2c6589d0a5ea7ab467084fa5.scope - libcontainer container bc038008864ca4270f1de93d87c6f08ea502398c2c6589d0a5ea7ab467084fa5. Sep 11 23:55:28.861906 systemd[1]: Started cri-containerd-ee76f921c463029fc9d3c92b5e034b1735c3e8c003f775ca93c585904c963e81.scope - libcontainer container ee76f921c463029fc9d3c92b5e034b1735c3e8c003f775ca93c585904c963e81. Sep 11 23:55:28.864889 containerd[1489]: time="2025-09-11T23:55:28.864850000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"56c91e2c2dac6d11094a516d90787cc5d3196a092f12cb588a3a60567bf51f9c\"" Sep 11 23:55:28.872715 containerd[1489]: time="2025-09-11T23:55:28.872684014Z" level=info msg="CreateContainer within sandbox \"56c91e2c2dac6d11094a516d90787cc5d3196a092f12cb588a3a60567bf51f9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 23:55:28.882175 containerd[1489]: time="2025-09-11T23:55:28.882139460Z" level=info msg="Container 8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:55:28.891486 containerd[1489]: time="2025-09-11T23:55:28.891432474Z" level=info msg="CreateContainer within sandbox \"56c91e2c2dac6d11094a516d90787cc5d3196a092f12cb588a3a60567bf51f9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928\"" Sep 11 23:55:28.893084 containerd[1489]: time="2025-09-11T23:55:28.892065023Z" level=info 
msg="StartContainer for \"8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928\"" Sep 11 23:55:28.893327 containerd[1489]: time="2025-09-11T23:55:28.893296732Z" level=info msg="connecting to shim 8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928" address="unix:///run/containerd/s/b102014ef7ab3509c5b52e7443cdbac059ecae1b4e2bb8abf3b47347de98ea96" protocol=ttrpc version=3 Sep 11 23:55:28.900419 containerd[1489]: time="2025-09-11T23:55:28.900363165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc038008864ca4270f1de93d87c6f08ea502398c2c6589d0a5ea7ab467084fa5\"" Sep 11 23:55:28.903976 containerd[1489]: time="2025-09-11T23:55:28.903945865Z" level=info msg="CreateContainer within sandbox \"bc038008864ca4270f1de93d87c6f08ea502398c2c6589d0a5ea7ab467084fa5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 23:55:28.911968 containerd[1489]: time="2025-09-11T23:55:28.911213669Z" level=info msg="Container 0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:55:28.915628 containerd[1489]: time="2025-09-11T23:55:28.915584679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c4114d5e412390a82f115c591a3600c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee76f921c463029fc9d3c92b5e034b1735c3e8c003f775ca93c585904c963e81\"" Sep 11 23:55:28.918002 systemd[1]: Started cri-containerd-8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928.scope - libcontainer container 8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928. 
Sep 11 23:55:28.919274 containerd[1489]: time="2025-09-11T23:55:28.919237312Z" level=info msg="CreateContainer within sandbox \"bc038008864ca4270f1de93d87c6f08ea502398c2c6589d0a5ea7ab467084fa5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091\"" Sep 11 23:55:28.919867 containerd[1489]: time="2025-09-11T23:55:28.919842782Z" level=info msg="StartContainer for \"0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091\"" Sep 11 23:55:28.920877 containerd[1489]: time="2025-09-11T23:55:28.920853311Z" level=info msg="connecting to shim 0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091" address="unix:///run/containerd/s/244b4b93cf344e9fa6d89d37bba57d00c3ce442c3b57d3e7de3862ec68019806" protocol=ttrpc version=3 Sep 11 23:55:28.921957 containerd[1489]: time="2025-09-11T23:55:28.921924227Z" level=info msg="CreateContainer within sandbox \"ee76f921c463029fc9d3c92b5e034b1735c3e8c003f775ca93c585904c963e81\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 23:55:28.932478 containerd[1489]: time="2025-09-11T23:55:28.932091300Z" level=info msg="Container 62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:55:28.939576 containerd[1489]: time="2025-09-11T23:55:28.939539587Z" level=info msg="CreateContainer within sandbox \"ee76f921c463029fc9d3c92b5e034b1735c3e8c003f775ca93c585904c963e81\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e\"" Sep 11 23:55:28.940108 containerd[1489]: time="2025-09-11T23:55:28.940088344Z" level=info msg="StartContainer for \"62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e\"" Sep 11 23:55:28.941560 containerd[1489]: time="2025-09-11T23:55:28.941523701Z" level=info msg="connecting to shim 
62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e" address="unix:///run/containerd/s/a6819d8b2a1d5b492142ebb4dfd3596e9c32a70aa2802cc75d1c337271ebd1b6" protocol=ttrpc version=3 Sep 11 23:55:28.945030 systemd[1]: Started cri-containerd-0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091.scope - libcontainer container 0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091. Sep 11 23:55:28.961616 containerd[1489]: time="2025-09-11T23:55:28.961519406Z" level=info msg="StartContainer for \"8331b424fc4eb7d3d8235dfb1ecf5b0ae9a89af97c600087962a166754c17928\" returns successfully" Sep 11 23:55:28.968891 systemd[1]: Started cri-containerd-62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e.scope - libcontainer container 62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e. Sep 11 23:55:28.972904 kubelet[2268]: I0911 23:55:28.972868 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:55:28.973329 kubelet[2268]: E0911 23:55:28.973245 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Sep 11 23:55:29.000451 containerd[1489]: time="2025-09-11T23:55:29.000325556Z" level=info msg="StartContainer for \"0a129165cf79b4d60e6101714e4300c20af112e32e4087d028d5f17e648bb091\" returns successfully" Sep 11 23:55:29.012732 containerd[1489]: time="2025-09-11T23:55:29.012689720Z" level=info msg="StartContainer for \"62feae224320abfe7d1777d03ad51504c98736ebfc22224d157aa783684c834e\" returns successfully" Sep 11 23:55:29.068004 kubelet[2268]: W0911 23:55:29.067924 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.129:6443: connect: connection refused Sep 11 23:55:29.068004 kubelet[2268]: E0911 
23:55:29.068005 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:55:29.774669 kubelet[2268]: I0911 23:55:29.774631 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:55:30.744621 kubelet[2268]: E0911 23:55:30.744582 2268 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 23:55:30.796257 kubelet[2268]: I0911 23:55:30.796215 2268 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 23:55:30.796257 kubelet[2268]: E0911 23:55:30.796264 2268 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 23:55:31.128670 kubelet[2268]: I0911 23:55:31.128607 2268 apiserver.go:52] "Watching apiserver" Sep 11 23:55:31.140441 kubelet[2268]: I0911 23:55:31.140358 2268 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 23:55:32.783233 systemd[1]: Reload requested from client PID 2548 ('systemctl') (unit session-7.scope)... Sep 11 23:55:32.783246 systemd[1]: Reloading... Sep 11 23:55:32.852821 zram_generator::config[2591]: No configuration found. Sep 11 23:55:33.060486 kubelet[2268]: E0911 23:55:33.060336 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:33.091261 systemd[1]: Reloading finished in 307 ms. Sep 11 23:55:33.116953 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 11 23:55:33.130487 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 23:55:33.130694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:55:33.130759 systemd[1]: kubelet.service: Consumed 2.143s CPU time, 128.1M memory peak. Sep 11 23:55:33.132873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:55:33.263616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:55:33.267594 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 23:55:33.311844 kubelet[2633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:55:33.311844 kubelet[2633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 23:55:33.311844 kubelet[2633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
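Most kubelet records in this log carry a klog header, e.g. `E0911 23:55:33.338460 2633 kubelet.go:1478]`: a severity letter (I/W/E/F), month and day, time with microseconds, a thread id, and the emitting `file:line`. A small parser for that header, sketched against the format as it appears in this log:

```python
import re

# klog header: <severity><mmdd> <hh:mm:ss.uuuuuu> <tid> <file:line>]
KLOG_RE = re.compile(
    r"(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<tid>\d+)\s+"
    r"(?P<source>[^\s\]]+:\d+)\]"
)

record = ('E0911 23:55:33.338460    2633 kubelet.go:1478] "Image garbage '
          'collection failed once."')
m = KLOG_RE.search(record)
print(m.group("severity"), m.group("time"), m.group("source"))
```

Filtering on the severity letter is often the quickest way to separate the expected startup noise (the I lines) from actionable failures (the E lines) in a dump like this one.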
Sep 11 23:55:33.311844 kubelet[2633]: I0911 23:55:33.311355 2633 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 23:55:33.316373 kubelet[2633]: I0911 23:55:33.316328 2633 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 23:55:33.316373 kubelet[2633]: I0911 23:55:33.316358 2633 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 23:55:33.316600 kubelet[2633]: I0911 23:55:33.316574 2633 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 23:55:33.317897 kubelet[2633]: I0911 23:55:33.317865 2633 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 23:55:33.320611 kubelet[2633]: I0911 23:55:33.320579 2633 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:55:33.325211 kubelet[2633]: I0911 23:55:33.325189 2633 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 23:55:33.327766 kubelet[2633]: I0911 23:55:33.327432 2633 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 23:55:33.327766 kubelet[2633]: I0911 23:55:33.327550 2633 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 23:55:33.327766 kubelet[2633]: I0911 23:55:33.327650 2633 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 23:55:33.327879 kubelet[2633]: I0911 23:55:33.327672 2633 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 11 23:55:33.327952 kubelet[2633]: I0911 23:55:33.327882 2633 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 23:55:33.327952 kubelet[2633]: I0911 23:55:33.327891 2633 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 23:55:33.327952 kubelet[2633]: I0911 23:55:33.327923 2633 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:55:33.328026 kubelet[2633]: I0911 23:55:33.328014 2633 kubelet.go:408] "Attempting to sync node with API server" Sep 11 23:55:33.328051 kubelet[2633]: I0911 23:55:33.328028 2633 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 23:55:33.328051 kubelet[2633]: I0911 23:55:33.328045 2633 kubelet.go:314] "Adding apiserver pod source" Sep 11 23:55:33.328132 kubelet[2633]: I0911 23:55:33.328056 2633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 23:55:33.328489 kubelet[2633]: I0911 23:55:33.328455 2633 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 23:55:33.331749 kubelet[2633]: I0911 23:55:33.330942 2633 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 23:55:33.331749 kubelet[2633]: I0911 23:55:33.331300 2633 server.go:1274] "Started kubelet" Sep 11 23:55:33.332439 kubelet[2633]: I0911 23:55:33.332402 2633 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:55:33.333317 kubelet[2633]: I0911 23:55:33.333298 2633 server.go:449] "Adding debug handlers to kubelet server" Sep 11 23:55:33.333929 kubelet[2633]: I0911 23:55:33.333878 2633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:55:33.334098 kubelet[2633]: I0911 23:55:33.334079 2633 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:55:33.336038 
kubelet[2633]: I0911 23:55:33.336018 2633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 23:55:33.338498 kubelet[2633]: I0911 23:55:33.336906 2633 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:55:33.338498 kubelet[2633]: I0911 23:55:33.338411 2633 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 23:55:33.338498 kubelet[2633]: E0911 23:55:33.338460 2633 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:55:33.339087 kubelet[2633]: E0911 23:55:33.338835 2633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:55:33.339299 kubelet[2633]: I0911 23:55:33.339175 2633 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 23:55:33.339351 kubelet[2633]: I0911 23:55:33.339344 2633 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:55:33.348434 kubelet[2633]: I0911 23:55:33.348081 2633 factory.go:221] Registration of the systemd container factory successfully Sep 11 23:55:33.348434 kubelet[2633]: I0911 23:55:33.348229 2633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:55:33.357543 kubelet[2633]: I0911 23:55:33.357518 2633 factory.go:221] Registration of the containerd container factory successfully Sep 11 23:55:33.357727 kubelet[2633]: I0911 23:55:33.357690 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 23:55:33.358836 kubelet[2633]: I0911 23:55:33.358808 2633 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 23:55:33.358836 kubelet[2633]: I0911 23:55:33.358833 2633 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 23:55:33.358933 kubelet[2633]: I0911 23:55:33.358849 2633 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 23:55:33.358933 kubelet[2633]: E0911 23:55:33.358891 2633 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:55:33.390088 kubelet[2633]: I0911 23:55:33.390059 2633 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 23:55:33.390088 kubelet[2633]: I0911 23:55:33.390079 2633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 23:55:33.390088 kubelet[2633]: I0911 23:55:33.390098 2633 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:55:33.390290 kubelet[2633]: I0911 23:55:33.390270 2633 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 23:55:33.390366 kubelet[2633]: I0911 23:55:33.390296 2633 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 23:55:33.390396 kubelet[2633]: I0911 23:55:33.390372 2633 policy_none.go:49] "None policy: Start" Sep 11 23:55:33.390977 kubelet[2633]: I0911 23:55:33.390954 2633 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 23:55:33.391035 kubelet[2633]: I0911 23:55:33.390981 2633 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:55:33.391124 kubelet[2633]: I0911 23:55:33.391110 2633 state_mem.go:75] "Updated machine memory state" Sep 11 23:55:33.394827 kubelet[2633]: I0911 23:55:33.394793 2633 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 23:55:33.395365 kubelet[2633]: I0911 23:55:33.395348 2633 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:55:33.395464 kubelet[2633]: I0911 23:55:33.395438 2633 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:55:33.396077 kubelet[2633]: I0911 23:55:33.396048 2633 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:55:33.465050 kubelet[2633]: E0911 23:55:33.464957 2633 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 23:55:33.499299 kubelet[2633]: I0911 23:55:33.499277 2633 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:55:33.505526 kubelet[2633]: I0911 23:55:33.505058 2633 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 23:55:33.505526 kubelet[2633]: I0911 23:55:33.505130 2633 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 23:55:33.642121 kubelet[2633]: I0911 23:55:33.641015 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4114d5e412390a82f115c591a3600c0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4114d5e412390a82f115c591a3600c0\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:33.642121 kubelet[2633]: I0911 23:55:33.641057 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:33.642121 kubelet[2633]: I0911 23:55:33.641077 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 
23:55:33.642121 kubelet[2633]: I0911 23:55:33.641096 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4114d5e412390a82f115c591a3600c0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4114d5e412390a82f115c591a3600c0\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:33.642121 kubelet[2633]: I0911 23:55:33.641114 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4114d5e412390a82f115c591a3600c0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c4114d5e412390a82f115c591a3600c0\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:33.642321 kubelet[2633]: I0911 23:55:33.641140 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:33.642321 kubelet[2633]: I0911 23:55:33.641180 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:33.642321 kubelet[2633]: I0911 23:55:33.641216 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 
11 23:55:33.642321 kubelet[2633]: I0911 23:55:33.641235 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:55:33.766202 kubelet[2633]: E0911 23:55:33.766142 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:33.766202 kubelet[2633]: E0911 23:55:33.766178 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:33.766996 kubelet[2633]: E0911 23:55:33.766362 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:34.328540 kubelet[2633]: I0911 23:55:34.328476 2633 apiserver.go:52] "Watching apiserver" Sep 11 23:55:34.339939 kubelet[2633]: I0911 23:55:34.339872 2633 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 23:55:34.373090 kubelet[2633]: E0911 23:55:34.373058 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:34.373501 kubelet[2633]: E0911 23:55:34.373461 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:34.385151 kubelet[2633]: E0911 23:55:34.385057 2633 kubelet.go:1915] "Failed creating a mirror pod for" 
err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 23:55:34.385344 kubelet[2633]: E0911 23:55:34.385324 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:34.448099 kubelet[2633]: I0911 23:55:34.447651 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.447632133 podStartE2EDuration="1.447632133s" podCreationTimestamp="2025-09-11 23:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:55:34.412891406 +0000 UTC m=+1.142229101" watchObservedRunningTime="2025-09-11 23:55:34.447632133 +0000 UTC m=+1.176969788" Sep 11 23:55:34.448099 kubelet[2633]: I0911 23:55:34.447798 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.447793182 podStartE2EDuration="1.447793182s" podCreationTimestamp="2025-09-11 23:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:55:34.447731584 +0000 UTC m=+1.177069239" watchObservedRunningTime="2025-09-11 23:55:34.447793182 +0000 UTC m=+1.177130837" Sep 11 23:55:34.531411 kubelet[2633]: I0911 23:55:34.530711 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.530693877 podStartE2EDuration="1.530693877s" podCreationTimestamp="2025-09-11 23:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:55:34.530592627 +0000 UTC m=+1.259930282" watchObservedRunningTime="2025-09-11 23:55:34.530693877 
+0000 UTC m=+1.260031532" Sep 11 23:55:35.374598 kubelet[2633]: E0911 23:55:35.374557 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:35.374990 kubelet[2633]: E0911 23:55:35.374822 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:37.409151 kubelet[2633]: E0911 23:55:37.409060 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:38.673351 kubelet[2633]: E0911 23:55:38.673309 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:39.381645 kubelet[2633]: E0911 23:55:39.380973 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:39.651248 kubelet[2633]: I0911 23:55:39.651152 2633 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 23:55:39.651499 containerd[1489]: time="2025-09-11T23:55:39.651465622Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 23:55:39.651784 kubelet[2633]: I0911 23:55:39.651633 2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 23:55:40.634441 systemd[1]: Created slice kubepods-besteffort-pod9a5f2b29_f102_43e8_b854_a9220344fdc8.slice - libcontainer container kubepods-besteffort-pod9a5f2b29_f102_43e8_b854_a9220344fdc8.slice. 
Sep 11 23:55:40.689517 kubelet[2633]: I0911 23:55:40.689454 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a5f2b29-f102-43e8-b854-a9220344fdc8-kube-proxy\") pod \"kube-proxy-dxv49\" (UID: \"9a5f2b29-f102-43e8-b854-a9220344fdc8\") " pod="kube-system/kube-proxy-dxv49"
Sep 11 23:55:40.690087 kubelet[2633]: I0911 23:55:40.689532 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a5f2b29-f102-43e8-b854-a9220344fdc8-xtables-lock\") pod \"kube-proxy-dxv49\" (UID: \"9a5f2b29-f102-43e8-b854-a9220344fdc8\") " pod="kube-system/kube-proxy-dxv49"
Sep 11 23:55:40.690087 kubelet[2633]: I0911 23:55:40.689558 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a5f2b29-f102-43e8-b854-a9220344fdc8-lib-modules\") pod \"kube-proxy-dxv49\" (UID: \"9a5f2b29-f102-43e8-b854-a9220344fdc8\") " pod="kube-system/kube-proxy-dxv49"
Sep 11 23:55:40.690087 kubelet[2633]: I0911 23:55:40.689576 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qcn\" (UniqueName: \"kubernetes.io/projected/9a5f2b29-f102-43e8-b854-a9220344fdc8-kube-api-access-f6qcn\") pod \"kube-proxy-dxv49\" (UID: \"9a5f2b29-f102-43e8-b854-a9220344fdc8\") " pod="kube-system/kube-proxy-dxv49"
Sep 11 23:55:40.726533 systemd[1]: Created slice kubepods-besteffort-podd094bc5a_e838_473e_809a_1a2ffbf05162.slice - libcontainer container kubepods-besteffort-podd094bc5a_e838_473e_809a_1a2ffbf05162.slice.
Sep 11 23:55:40.790694 kubelet[2633]: I0911 23:55:40.790647 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhds6\" (UniqueName: \"kubernetes.io/projected/d094bc5a-e838-473e-809a-1a2ffbf05162-kube-api-access-jhds6\") pod \"tigera-operator-58fc44c59b-89sbs\" (UID: \"d094bc5a-e838-473e-809a-1a2ffbf05162\") " pod="tigera-operator/tigera-operator-58fc44c59b-89sbs"
Sep 11 23:55:40.790694 kubelet[2633]: I0911 23:55:40.790690 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d094bc5a-e838-473e-809a-1a2ffbf05162-var-lib-calico\") pod \"tigera-operator-58fc44c59b-89sbs\" (UID: \"d094bc5a-e838-473e-809a-1a2ffbf05162\") " pod="tigera-operator/tigera-operator-58fc44c59b-89sbs"
Sep 11 23:55:40.946414 kubelet[2633]: E0911 23:55:40.946296 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:55:40.947185 containerd[1489]: time="2025-09-11T23:55:40.946977127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxv49,Uid:9a5f2b29-f102-43e8-b854-a9220344fdc8,Namespace:kube-system,Attempt:0,}"
Sep 11 23:55:40.962768 containerd[1489]: time="2025-09-11T23:55:40.962709432Z" level=info msg="connecting to shim ddcd2d9745a2eff4f9a6dfc97a9f4e847224022642ad91df81c0677202d4e75e" address="unix:///run/containerd/s/7b64001a6f00c2a2c1e3db6769def74c43b263c6c8f2ef2b350f48d929917b4d" namespace=k8s.io protocol=ttrpc version=3
Sep 11 23:55:40.982906 systemd[1]: Started cri-containerd-ddcd2d9745a2eff4f9a6dfc97a9f4e847224022642ad91df81c0677202d4e75e.scope - libcontainer container ddcd2d9745a2eff4f9a6dfc97a9f4e847224022642ad91df81c0677202d4e75e.
Sep 11 23:55:41.003579 containerd[1489]: time="2025-09-11T23:55:41.003524892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxv49,Uid:9a5f2b29-f102-43e8-b854-a9220344fdc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddcd2d9745a2eff4f9a6dfc97a9f4e847224022642ad91df81c0677202d4e75e\""
Sep 11 23:55:41.004208 kubelet[2633]: E0911 23:55:41.004186 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:55:41.006497 containerd[1489]: time="2025-09-11T23:55:41.006452886Z" level=info msg="CreateContainer within sandbox \"ddcd2d9745a2eff4f9a6dfc97a9f4e847224022642ad91df81c0677202d4e75e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 23:55:41.016756 containerd[1489]: time="2025-09-11T23:55:41.016488924Z" level=info msg="Container 77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:55:41.022915 containerd[1489]: time="2025-09-11T23:55:41.022889491Z" level=info msg="CreateContainer within sandbox \"ddcd2d9745a2eff4f9a6dfc97a9f4e847224022642ad91df81c0677202d4e75e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01\""
Sep 11 23:55:41.023456 containerd[1489]: time="2025-09-11T23:55:41.023433989Z" level=info msg="StartContainer for \"77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01\""
Sep 11 23:55:41.024974 containerd[1489]: time="2025-09-11T23:55:41.024945912Z" level=info msg="connecting to shim 77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01" address="unix:///run/containerd/s/7b64001a6f00c2a2c1e3db6769def74c43b263c6c8f2ef2b350f48d929917b4d" protocol=ttrpc version=3
Sep 11 23:55:41.029776 containerd[1489]: time="2025-09-11T23:55:41.029733666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-89sbs,Uid:d094bc5a-e838-473e-809a-1a2ffbf05162,Namespace:tigera-operator,Attempt:0,}"
Sep 11 23:55:41.044224 containerd[1489]: time="2025-09-11T23:55:41.044187457Z" level=info msg="connecting to shim 338d8040dea98d36e2e482f9ff352928b9056ebdcb298640333de38beb17998c" address="unix:///run/containerd/s/552efb9da5d7973de83dfa36f43b7aa2fd61a1b60633064915552ef9b3472b03" namespace=k8s.io protocol=ttrpc version=3
Sep 11 23:55:41.044911 systemd[1]: Started cri-containerd-77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01.scope - libcontainer container 77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01.
Sep 11 23:55:41.070945 systemd[1]: Started cri-containerd-338d8040dea98d36e2e482f9ff352928b9056ebdcb298640333de38beb17998c.scope - libcontainer container 338d8040dea98d36e2e482f9ff352928b9056ebdcb298640333de38beb17998c.
Sep 11 23:55:41.095595 containerd[1489]: time="2025-09-11T23:55:41.095385074Z" level=info msg="StartContainer for \"77d45ddc53c641382ff39d926baf0f2ba0d75da6f661c5f83d8efd81cb977b01\" returns successfully"
Sep 11 23:55:41.107746 containerd[1489]: time="2025-09-11T23:55:41.107697836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-89sbs,Uid:d094bc5a-e838-473e-809a-1a2ffbf05162,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"338d8040dea98d36e2e482f9ff352928b9056ebdcb298640333de38beb17998c\""
Sep 11 23:55:41.109822 containerd[1489]: time="2025-09-11T23:55:41.109793221Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 11 23:55:41.384789 kubelet[2633]: E0911 23:55:41.384688 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:55:41.808427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980828654.mount: Deactivated successfully.
Sep 11 23:55:42.680991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380241449.mount: Deactivated successfully.
Sep 11 23:55:43.327804 containerd[1489]: time="2025-09-11T23:55:43.327619056Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:55:43.328319 containerd[1489]: time="2025-09-11T23:55:43.328285960Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 11 23:55:43.329098 containerd[1489]: time="2025-09-11T23:55:43.329042873Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:55:43.331244 containerd[1489]: time="2025-09-11T23:55:43.331201481Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:55:43.332017 containerd[1489]: time="2025-09-11T23:55:43.331714330Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.221884106s"
Sep 11 23:55:43.332017 containerd[1489]: time="2025-09-11T23:55:43.331766135Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 11 23:55:43.334558 containerd[1489]: time="2025-09-11T23:55:43.334534122Z" level=info msg="CreateContainer within sandbox \"338d8040dea98d36e2e482f9ff352928b9056ebdcb298640333de38beb17998c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 11 23:55:43.340314 containerd[1489]: time="2025-09-11T23:55:43.340276475Z" level=info msg="Container 8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:55:43.347144 containerd[1489]: time="2025-09-11T23:55:43.347049927Z" level=info msg="CreateContainer within sandbox \"338d8040dea98d36e2e482f9ff352928b9056ebdcb298640333de38beb17998c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192\""
Sep 11 23:55:43.347425 containerd[1489]: time="2025-09-11T23:55:43.347405521Z" level=info msg="StartContainer for \"8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192\""
Sep 11 23:55:43.348492 containerd[1489]: time="2025-09-11T23:55:43.348436500Z" level=info msg="connecting to shim 8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192" address="unix:///run/containerd/s/552efb9da5d7973de83dfa36f43b7aa2fd61a1b60633064915552ef9b3472b03" protocol=ttrpc version=3
Sep 11 23:55:43.371914 systemd[1]: Started cri-containerd-8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192.scope - libcontainer container 8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192.
Sep 11 23:55:43.396243 containerd[1489]: time="2025-09-11T23:55:43.396180097Z" level=info msg="StartContainer for \"8c65f922251181be8423db74d4707945b60d7f6d9f72b14a6fc38f839d6de192\" returns successfully"
Sep 11 23:55:44.403261 kubelet[2633]: I0911 23:55:44.401461 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dxv49" podStartSLOduration=4.401442979 podStartE2EDuration="4.401442979s" podCreationTimestamp="2025-09-11 23:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:55:41.396816435 +0000 UTC m=+8.126154090" watchObservedRunningTime="2025-09-11 23:55:44.401442979 +0000 UTC m=+11.130780634"
Sep 11 23:55:44.403261 kubelet[2633]: I0911 23:55:44.401578 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-89sbs" podStartSLOduration=2.178261019 podStartE2EDuration="4.401572031s" podCreationTimestamp="2025-09-11 23:55:40 +0000 UTC" firstStartedPulling="2025-09-11 23:55:41.109021018 +0000 UTC m=+7.838358673" lastFinishedPulling="2025-09-11 23:55:43.33233203 +0000 UTC m=+10.061669685" observedRunningTime="2025-09-11 23:55:44.400915891 +0000 UTC m=+11.130253546" watchObservedRunningTime="2025-09-11 23:55:44.401572031 +0000 UTC m=+11.130909686"
Sep 11 23:55:44.577048 kubelet[2633]: E0911 23:55:44.576993 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:55:47.417221 kubelet[2633]: E0911 23:55:47.417179 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:55:48.552018 update_engine[1477]: I20250911 23:55:48.551940 1477 update_attempter.cc:509] Updating boot flags...
Sep 11 23:55:48.847053 sudo[1708]: pam_unix(sudo:session): session closed for user root Sep 11 23:55:48.850138 sshd[1707]: Connection closed by 10.0.0.1 port 45444 Sep 11 23:55:48.850566 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Sep 11 23:55:48.854971 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:45444.service: Deactivated successfully. Sep 11 23:55:48.858916 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 23:55:48.861865 systemd[1]: session-7.scope: Consumed 7.207s CPU time, 217.6M memory peak. Sep 11 23:55:48.863233 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Sep 11 23:55:48.865870 systemd-logind[1471]: Removed session 7. Sep 11 23:55:53.978692 systemd[1]: Created slice kubepods-besteffort-pod8a4b5466_b0ba_44b0_aca8_e8c3cf8bf7ee.slice - libcontainer container kubepods-besteffort-pod8a4b5466_b0ba_44b0_aca8_e8c3cf8bf7ee.slice. Sep 11 23:55:53.988665 kubelet[2633]: I0911 23:55:53.988625 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjvg\" (UniqueName: \"kubernetes.io/projected/8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee-kube-api-access-5rjvg\") pod \"calico-typha-7c4d596669-w2qhv\" (UID: \"8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee\") " pod="calico-system/calico-typha-7c4d596669-w2qhv" Sep 11 23:55:53.988665 kubelet[2633]: I0911 23:55:53.988669 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee-tigera-ca-bundle\") pod \"calico-typha-7c4d596669-w2qhv\" (UID: \"8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee\") " pod="calico-system/calico-typha-7c4d596669-w2qhv" Sep 11 23:55:53.989037 kubelet[2633]: I0911 23:55:53.988686 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee-typha-certs\") pod \"calico-typha-7c4d596669-w2qhv\" (UID: \"8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee\") " pod="calico-system/calico-typha-7c4d596669-w2qhv" Sep 11 23:55:54.266610 systemd[1]: Created slice kubepods-besteffort-pod0ef1deb6_0c07_436a_bec4_3666f2b363d7.slice - libcontainer container kubepods-besteffort-pod0ef1deb6_0c07_436a_bec4_3666f2b363d7.slice. Sep 11 23:55:54.288710 kubelet[2633]: E0911 23:55:54.288416 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:54.289503 containerd[1489]: time="2025-09-11T23:55:54.289447498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c4d596669-w2qhv,Uid:8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee,Namespace:calico-system,Attempt:0,}" Sep 11 23:55:54.290601 kubelet[2633]: I0911 23:55:54.290543 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-cni-net-dir\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290601 kubelet[2633]: I0911 23:55:54.290583 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ef1deb6-0c07-436a-bec4-3666f2b363d7-node-certs\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290601 kubelet[2633]: I0911 23:55:54.290602 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-policysync\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") 
" pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290864 kubelet[2633]: I0911 23:55:54.290620 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-var-lib-calico\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290864 kubelet[2633]: I0911 23:55:54.290636 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-cni-bin-dir\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290864 kubelet[2633]: I0911 23:55:54.290651 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-var-run-calico\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290864 kubelet[2633]: I0911 23:55:54.290665 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-xtables-lock\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.290864 kubelet[2633]: I0911 23:55:54.290684 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-cni-log-dir\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.291128 kubelet[2633]: 
I0911 23:55:54.290724 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-flexvol-driver-host\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.291128 kubelet[2633]: I0911 23:55:54.291006 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ef1deb6-0c07-436a-bec4-3666f2b363d7-lib-modules\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.291128 kubelet[2633]: I0911 23:55:54.291034 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9572\" (UniqueName: \"kubernetes.io/projected/0ef1deb6-0c07-436a-bec4-3666f2b363d7-kube-api-access-q9572\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.291128 kubelet[2633]: I0911 23:55:54.291088 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ef1deb6-0c07-436a-bec4-3666f2b363d7-tigera-ca-bundle\") pod \"calico-node-979xz\" (UID: \"0ef1deb6-0c07-436a-bec4-3666f2b363d7\") " pod="calico-system/calico-node-979xz" Sep 11 23:55:54.325932 containerd[1489]: time="2025-09-11T23:55:54.325891316Z" level=info msg="connecting to shim ef31be512ac3172aa3decf96a840c9d906b72c06c12f09afaffe248b01dafb13" address="unix:///run/containerd/s/5192af48a9677155d8f163595349ac9a134a89d53114e34bf79c29c1578eff3b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:55:54.394317 kubelet[2633]: E0911 23:55:54.394213 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", 
error: unexpected end of JSON input Sep 11 23:55:54.394317 kubelet[2633]: W0911 23:55:54.394251 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.394317 kubelet[2633]: E0911 23:55:54.394277 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.394808 kubelet[2633]: E0911 23:55:54.394798 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.394851 kubelet[2633]: W0911 23:55:54.394810 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.394851 kubelet[2633]: E0911 23:55:54.394841 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.396008 kubelet[2633]: E0911 23:55:54.395982 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.396008 kubelet[2633]: W0911 23:55:54.395997 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.396103 kubelet[2633]: E0911 23:55:54.396014 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.403635 kubelet[2633]: E0911 23:55:54.402804 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.403635 kubelet[2633]: W0911 23:55:54.402831 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.403635 kubelet[2633]: E0911 23:55:54.402942 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.403635 kubelet[2633]: E0911 23:55:54.403054 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.403635 kubelet[2633]: W0911 23:55:54.403063 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.403635 kubelet[2633]: E0911 23:55:54.403201 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.403851 kubelet[2633]: E0911 23:55:54.403831 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.403877 kubelet[2633]: W0911 23:55:54.403853 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.403877 kubelet[2633]: E0911 23:55:54.403867 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.404848 kubelet[2633]: E0911 23:55:54.404820 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.404848 kubelet[2633]: W0911 23:55:54.404839 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.404848 kubelet[2633]: E0911 23:55:54.404852 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.415857 kubelet[2633]: E0911 23:55:54.415826 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.415857 kubelet[2633]: W0911 23:55:54.415847 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.416007 kubelet[2633]: E0911 23:55:54.415866 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.450964 systemd[1]: Started cri-containerd-ef31be512ac3172aa3decf96a840c9d906b72c06c12f09afaffe248b01dafb13.scope - libcontainer container ef31be512ac3172aa3decf96a840c9d906b72c06c12f09afaffe248b01dafb13. Sep 11 23:55:54.502968 containerd[1489]: time="2025-09-11T23:55:54.502923278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c4d596669-w2qhv,Uid:8a4b5466-b0ba-44b0-aca8-e8c3cf8bf7ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef31be512ac3172aa3decf96a840c9d906b72c06c12f09afaffe248b01dafb13\"" Sep 11 23:55:54.503688 kubelet[2633]: E0911 23:55:54.503665 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:54.506466 containerd[1489]: time="2025-09-11T23:55:54.506420872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 23:55:54.559983 kubelet[2633]: E0911 23:55:54.557829 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtjz" 
podUID="bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8" Sep 11 23:55:54.571469 containerd[1489]: time="2025-09-11T23:55:54.571427551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-979xz,Uid:0ef1deb6-0c07-436a-bec4-3666f2b363d7,Namespace:calico-system,Attempt:0,}" Sep 11 23:55:54.584439 kubelet[2633]: E0911 23:55:54.584397 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.584439 kubelet[2633]: W0911 23:55:54.584443 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.584600 kubelet[2633]: E0911 23:55:54.584466 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.584811 kubelet[2633]: E0911 23:55:54.584798 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.584839 kubelet[2633]: W0911 23:55:54.584811 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.584839 kubelet[2633]: E0911 23:55:54.584821 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.584984 kubelet[2633]: E0911 23:55:54.584973 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.585017 kubelet[2633]: W0911 23:55:54.584983 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.585017 kubelet[2633]: E0911 23:55:54.584991 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.585120 kubelet[2633]: E0911 23:55:54.585110 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.585140 kubelet[2633]: W0911 23:55:54.585120 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.585140 kubelet[2633]: E0911 23:55:54.585127 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.585269 kubelet[2633]: E0911 23:55:54.585259 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.585294 kubelet[2633]: W0911 23:55:54.585271 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.585294 kubelet[2633]: E0911 23:55:54.585279 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.585396 kubelet[2633]: E0911 23:55:54.585388 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.585416 kubelet[2633]: W0911 23:55:54.585398 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.585416 kubelet[2633]: E0911 23:55:54.585406 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.585559 kubelet[2633]: E0911 23:55:54.585549 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.585559 kubelet[2633]: W0911 23:55:54.585558 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.585606 kubelet[2633]: E0911 23:55:54.585566 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.585731 kubelet[2633]: E0911 23:55:54.585718 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.585767 kubelet[2633]: W0911 23:55:54.585731 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.585767 kubelet[2633]: E0911 23:55:54.585763 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.585922 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587296 kubelet[2633]: W0911 23:55:54.585935 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.585944 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.586087 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587296 kubelet[2633]: W0911 23:55:54.586093 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.586101 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.586228 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587296 kubelet[2633]: W0911 23:55:54.586234 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.586242 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.587296 kubelet[2633]: E0911 23:55:54.586370 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587575 kubelet[2633]: W0911 23:55:54.586377 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587575 kubelet[2633]: E0911 23:55:54.586384 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.587575 kubelet[2633]: E0911 23:55:54.586510 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587575 kubelet[2633]: W0911 23:55:54.586517 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587575 kubelet[2633]: E0911 23:55:54.586539 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.587575 kubelet[2633]: E0911 23:55:54.586660 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587575 kubelet[2633]: W0911 23:55:54.586667 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587575 kubelet[2633]: E0911 23:55:54.586674 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.587575 kubelet[2633]: E0911 23:55:54.586833 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587575 kubelet[2633]: W0911 23:55:54.586839 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.586846 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.586973 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587782 kubelet[2633]: W0911 23:55:54.586980 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.586986 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.587110 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587782 kubelet[2633]: W0911 23:55:54.587117 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.587123 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.587239 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.587782 kubelet[2633]: W0911 23:55:54.587246 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.587782 kubelet[2633]: E0911 23:55:54.587253 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.588089 kubelet[2633]: E0911 23:55:54.588070 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.588089 kubelet[2633]: W0911 23:55:54.588082 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.588089 kubelet[2633]: E0911 23:55:54.588090 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.588256 kubelet[2633]: E0911 23:55:54.588227 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.588256 kubelet[2633]: W0911 23:55:54.588249 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.588310 kubelet[2633]: E0911 23:55:54.588260 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.592910 containerd[1489]: time="2025-09-11T23:55:54.592872618Z" level=info msg="connecting to shim 0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b" address="unix:///run/containerd/s/86e7ec49bba57509ffe96f69f4c7e880a4443912aa8f468c11f97370f6e6e3c0" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:55:54.594798 kubelet[2633]: E0911 23:55:54.594774 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.594798 kubelet[2633]: W0911 23:55:54.594791 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.594798 kubelet[2633]: E0911 23:55:54.594805 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.594910 kubelet[2633]: I0911 23:55:54.594832 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8-registration-dir\") pod \"csi-node-driver-8rtjz\" (UID: \"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8\") " pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:55:54.595063 kubelet[2633]: E0911 23:55:54.595050 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.595099 kubelet[2633]: W0911 23:55:54.595063 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.595099 kubelet[2633]: E0911 23:55:54.595079 2633 plugins.go:691] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.595099 kubelet[2633]: I0911 23:55:54.595096 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8-varrun\") pod \"csi-node-driver-8rtjz\" (UID: \"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8\") " pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:55:54.595378 kubelet[2633]: E0911 23:55:54.595363 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.595378 kubelet[2633]: W0911 23:55:54.595376 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.595453 kubelet[2633]: E0911 23:55:54.595391 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.595453 kubelet[2633]: I0911 23:55:54.595410 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8-kubelet-dir\") pod \"csi-node-driver-8rtjz\" (UID: \"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8\") " pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:55:54.595611 kubelet[2633]: E0911 23:55:54.595582 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.595611 kubelet[2633]: W0911 23:55:54.595592 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.595611 kubelet[2633]: E0911 23:55:54.595601 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.595702 kubelet[2633]: I0911 23:55:54.595618 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8-socket-dir\") pod \"csi-node-driver-8rtjz\" (UID: \"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8\") " pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:55:54.595861 kubelet[2633]: E0911 23:55:54.595823 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.595861 kubelet[2633]: W0911 23:55:54.595832 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.595861 kubelet[2633]: E0911 23:55:54.595845 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.595861 kubelet[2633]: I0911 23:55:54.595861 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjnn\" (UniqueName: \"kubernetes.io/projected/bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8-kube-api-access-qvjnn\") pod \"csi-node-driver-8rtjz\" (UID: \"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8\") " pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:55:54.596056 kubelet[2633]: E0911 23:55:54.596037 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.596056 kubelet[2633]: W0911 23:55:54.596055 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.596108 kubelet[2633]: E0911 23:55:54.596074 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.596239 kubelet[2633]: E0911 23:55:54.596227 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.596268 kubelet[2633]: W0911 23:55:54.596239 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.596268 kubelet[2633]: E0911 23:55:54.596257 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.596432 kubelet[2633]: E0911 23:55:54.596419 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.596457 kubelet[2633]: W0911 23:55:54.596432 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.596457 kubelet[2633]: E0911 23:55:54.596446 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.596614 kubelet[2633]: E0911 23:55:54.596601 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.596614 kubelet[2633]: W0911 23:55:54.596613 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.596657 kubelet[2633]: E0911 23:55:54.596627 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.597388 kubelet[2633]: E0911 23:55:54.597372 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.597415 kubelet[2633]: W0911 23:55:54.597388 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.597415 kubelet[2633]: E0911 23:55:54.597405 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.597590 kubelet[2633]: E0911 23:55:54.597577 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.597590 kubelet[2633]: W0911 23:55:54.597590 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.597639 kubelet[2633]: E0911 23:55:54.597617 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.597769 kubelet[2633]: E0911 23:55:54.597756 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.597795 kubelet[2633]: W0911 23:55:54.597768 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.597827 kubelet[2633]: E0911 23:55:54.597800 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.597979 kubelet[2633]: E0911 23:55:54.597964 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.597979 kubelet[2633]: W0911 23:55:54.597976 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.598046 kubelet[2633]: E0911 23:55:54.597988 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:54.598170 kubelet[2633]: E0911 23:55:54.598155 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.598170 kubelet[2633]: W0911 23:55:54.598168 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.598218 kubelet[2633]: E0911 23:55:54.598178 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.598343 kubelet[2633]: E0911 23:55:54.598327 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.598343 kubelet[2633]: W0911 23:55:54.598338 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.598395 kubelet[2633]: E0911 23:55:54.598347 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.614925 systemd[1]: Started cri-containerd-0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b.scope - libcontainer container 0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b. 
Sep 11 23:55:54.646693 containerd[1489]: time="2025-09-11T23:55:54.646643516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-979xz,Uid:0ef1deb6-0c07-436a-bec4-3666f2b363d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\"" Sep 11 23:55:54.697272 kubelet[2633]: E0911 23:55:54.697155 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.697272 kubelet[2633]: W0911 23:55:54.697182 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.697429 kubelet[2633]: E0911 23:55:54.697341 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:54.698225 kubelet[2633]: E0911 23:55:54.698150 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:54.698225 kubelet[2633]: W0911 23:55:54.698167 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:54.699122 kubelet[2633]: E0911 23:55:54.698768 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 11 23:55:55.865865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367474002.mount: Deactivated successfully.
Sep 11 23:55:56.198545 containerd[1489]: time="2025-09-11T23:55:56.198432939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:55:56.199323 containerd[1489]: time="2025-09-11T23:55:56.199294383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 11 23:55:56.200042 containerd[1489]: time="2025-09-11T23:55:56.200015619Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:55:56.201589 containerd[1489]: time="2025-09-11T23:55:56.201554657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:55:56.202289 containerd[1489]: time="2025-09-11T23:55:56.202261613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.695802299s"
Sep 11 23:55:56.202339 containerd[1489]: time="2025-09-11T23:55:56.202293134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 11 23:55:56.204500 containerd[1489]: time="2025-09-11T23:55:56.203954138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 11 23:55:56.219052 containerd[1489]: time="2025-09-11T23:55:56.219027341Z" level=info msg="CreateContainer within sandbox \"ef31be512ac3172aa3decf96a840c9d906b72c06c12f09afaffe248b01dafb13\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 11 23:55:56.226650 containerd[1489]: time="2025-09-11T23:55:56.225888608Z" level=info msg="Container 578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:55:56.232750 containerd[1489]: time="2025-09-11T23:55:56.232694992Z" level=info msg="CreateContainer within sandbox \"ef31be512ac3172aa3decf96a840c9d906b72c06c12f09afaffe248b01dafb13\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9\""
Sep 11 23:55:56.233337 containerd[1489]: time="2025-09-11T23:55:56.233300663Z" level=info msg="StartContainer for \"578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9\""
Sep 11 23:55:56.234528 containerd[1489]: time="2025-09-11T23:55:56.234439920Z" level=info msg="connecting to shim 578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9" address="unix:///run/containerd/s/5192af48a9677155d8f163595349ac9a134a89d53114e34bf79c29c1578eff3b" protocol=ttrpc version=3
Sep 11 23:55:56.256920 systemd[1]: Started cri-containerd-578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9.scope - libcontainer container 578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9.
Sep 11 23:55:56.291218 containerd[1489]: time="2025-09-11T23:55:56.291182830Z" level=info msg="StartContainer for \"578c281806ee3959350169e1914774fa31be89b7df13538bbace32998464c6a9\" returns successfully"
Sep 11 23:55:56.359628 kubelet[2633]: E0911 23:55:56.359580 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtjz" podUID="bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8"
Sep 11 23:55:56.422659 kubelet[2633]: E0911 23:55:56.422559 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:55:56.501645 kubelet[2633]: E0911 23:55:56.501531 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 23:55:56.501645 kubelet[2633]: W0911 23:55:56.501555 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 23:55:56.501849 kubelet[2633]: E0911 23:55:56.501671 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 11 23:55:56.524768 kubelet[2633]: E0911 23:55:56.524750 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.524768 kubelet[2633]: W0911 23:55:56.524764 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.524768 kubelet[2633]: E0911 23:55:56.524779 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:56.525097 kubelet[2633]: E0911 23:55:56.525059 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.525097 kubelet[2633]: W0911 23:55:56.525068 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.525097 kubelet[2633]: E0911 23:55:56.525082 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:56.525407 kubelet[2633]: E0911 23:55:56.525366 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.525407 kubelet[2633]: W0911 23:55:56.525375 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.525797 kubelet[2633]: E0911 23:55:56.525768 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:56.526082 kubelet[2633]: E0911 23:55:56.526056 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.526082 kubelet[2633]: W0911 23:55:56.526068 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.526082 kubelet[2633]: E0911 23:55:56.526080 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:56.526282 kubelet[2633]: E0911 23:55:56.526264 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.526282 kubelet[2633]: W0911 23:55:56.526276 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.526679 kubelet[2633]: E0911 23:55:56.526611 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:56.526908 kubelet[2633]: E0911 23:55:56.526885 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.526908 kubelet[2633]: W0911 23:55:56.526904 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.527511 kubelet[2633]: E0911 23:55:56.527479 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:56.527708 kubelet[2633]: E0911 23:55:56.527692 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.527708 kubelet[2633]: W0911 23:55:56.527705 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.527963 kubelet[2633]: E0911 23:55:56.527942 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:56.528536 kubelet[2633]: E0911 23:55:56.528518 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.528536 kubelet[2633]: W0911 23:55:56.528531 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.528784 kubelet[2633]: E0911 23:55:56.528760 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:56.529317 kubelet[2633]: E0911 23:55:56.529293 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.529317 kubelet[2633]: W0911 23:55:56.529309 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.529861 kubelet[2633]: E0911 23:55:56.529540 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:56.530006 kubelet[2633]: E0911 23:55:56.529950 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.530006 kubelet[2633]: W0911 23:55:56.529965 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.530006 kubelet[2633]: E0911 23:55:56.529985 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:56.530778 kubelet[2633]: E0911 23:55:56.530535 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.530842 kubelet[2633]: W0911 23:55:56.530779 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.530842 kubelet[2633]: E0911 23:55:56.530797 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:56.532106 kubelet[2633]: E0911 23:55:56.532088 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:56.532106 kubelet[2633]: W0911 23:55:56.532102 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:56.532192 kubelet[2633]: E0911 23:55:56.532114 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.422584 kubelet[2633]: I0911 23:55:57.422520 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:55:57.424429 kubelet[2633]: E0911 23:55:57.424350 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:55:57.457123 containerd[1489]: time="2025-09-11T23:55:57.457059608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:57.457646 containerd[1489]: time="2025-09-11T23:55:57.457622235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 11 23:55:57.458509 containerd[1489]: time="2025-09-11T23:55:57.458456116Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:57.460488 containerd[1489]: time="2025-09-11T23:55:57.460457612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:55:57.461334 containerd[1489]: time="2025-09-11T23:55:57.461081523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.257096182s" Sep 11 23:55:57.461334 containerd[1489]: time="2025-09-11T23:55:57.461115884Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 11 23:55:57.463148 containerd[1489]: time="2025-09-11T23:55:57.463109141Z" level=info msg="CreateContainer within sandbox \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 23:55:57.483191 containerd[1489]: time="2025-09-11T23:55:57.482110661Z" level=info msg="Container 97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:55:57.490750 containerd[1489]: time="2025-09-11T23:55:57.490702236Z" level=info msg="CreateContainer within sandbox \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\"" Sep 11 23:55:57.492627 containerd[1489]: time="2025-09-11T23:55:57.492602848Z" level=info msg="StartContainer for \"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\"" Sep 11 23:55:57.495308 containerd[1489]: time="2025-09-11T23:55:57.494822876Z" level=info msg="connecting to shim 97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2" address="unix:///run/containerd/s/86e7ec49bba57509ffe96f69f4c7e880a4443912aa8f468c11f97370f6e6e3c0" protocol=ttrpc version=3 Sep 11 23:55:57.514166 kubelet[2633]: E0911 23:55:57.514144 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.514295 kubelet[2633]: W0911 23:55:57.514280 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.514369 kubelet[2633]: E0911 23:55:57.514357 2633 plugins.go:691] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.514612 kubelet[2633]: E0911 23:55:57.514599 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.514776 kubelet[2633]: W0911 23:55:57.514760 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.514841 kubelet[2633]: E0911 23:55:57.514829 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.515052 kubelet[2633]: E0911 23:55:57.515033 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.515129 kubelet[2633]: W0911 23:55:57.515117 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.515184 kubelet[2633]: E0911 23:55:57.515174 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.515441 kubelet[2633]: E0911 23:55:57.515427 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.515531 kubelet[2633]: W0911 23:55:57.515519 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.515599 kubelet[2633]: E0911 23:55:57.515577 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.515902 kubelet[2633]: E0911 23:55:57.515887 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.515980 kubelet[2633]: W0911 23:55:57.515966 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.516042 kubelet[2633]: E0911 23:55:57.516023 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.516351 kubelet[2633]: E0911 23:55:57.516248 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.516351 kubelet[2633]: W0911 23:55:57.516261 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.516351 kubelet[2633]: E0911 23:55:57.516276 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.516499 kubelet[2633]: E0911 23:55:57.516487 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.516574 kubelet[2633]: W0911 23:55:57.516562 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.516718 kubelet[2633]: E0911 23:55:57.516640 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.517039 kubelet[2633]: E0911 23:55:57.517025 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.517128 kubelet[2633]: W0911 23:55:57.517116 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.517177 kubelet[2633]: E0911 23:55:57.517168 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.517457 kubelet[2633]: E0911 23:55:57.517388 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.517457 kubelet[2633]: W0911 23:55:57.517399 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.517457 kubelet[2633]: E0911 23:55:57.517408 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.517670 kubelet[2633]: E0911 23:55:57.517651 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.517789 kubelet[2633]: W0911 23:55:57.517713 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.517789 kubelet[2633]: E0911 23:55:57.517726 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.518008 kubelet[2633]: E0911 23:55:57.517982 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.518008 kubelet[2633]: W0911 23:55:57.517993 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.518161 kubelet[2633]: E0911 23:55:57.518087 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.518320 kubelet[2633]: E0911 23:55:57.518308 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.518542 kubelet[2633]: W0911 23:55:57.518420 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.518542 kubelet[2633]: E0911 23:55:57.518437 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.518691 kubelet[2633]: E0911 23:55:57.518677 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.518818 kubelet[2633]: W0911 23:55:57.518788 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.518865 kubelet[2633]: E0911 23:55:57.518821 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.519034 kubelet[2633]: E0911 23:55:57.519020 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.519034 kubelet[2633]: W0911 23:55:57.519033 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.519099 kubelet[2633]: E0911 23:55:57.519042 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.519183 kubelet[2633]: E0911 23:55:57.519171 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.519183 kubelet[2633]: W0911 23:55:57.519182 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.519238 kubelet[2633]: E0911 23:55:57.519190 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.520909 systemd[1]: Started cri-containerd-97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2.scope - libcontainer container 97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2. 
Sep 11 23:55:57.529804 kubelet[2633]: E0911 23:55:57.529769 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 23:55:57.529804 kubelet[2633]: W0911 23:55:57.529785 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 23:55:57.529804 kubelet[2633]: E0911 23:55:57.529798 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same error triplet repeats, only timestamps changing, through Sep 11 23:55:57.538]
Sep 11 23:55:57.538142 kubelet[2633]: E0911 23:55:57.538092 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.538284 kubelet[2633]: E0911 23:55:57.538265 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.538284 kubelet[2633]: W0911 23:55:57.538276 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.538350 kubelet[2633]: E0911 23:55:57.538289 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.538517 kubelet[2633]: E0911 23:55:57.538495 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.538517 kubelet[2633]: W0911 23:55:57.538511 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.538575 kubelet[2633]: E0911 23:55:57.538525 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:55:57.538795 kubelet[2633]: E0911 23:55:57.538771 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:55:57.538795 kubelet[2633]: W0911 23:55:57.538788 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:55:57.538862 kubelet[2633]: E0911 23:55:57.538800 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:55:57.577173 systemd[1]: cri-containerd-97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2.scope: Deactivated successfully. Sep 11 23:55:57.593780 containerd[1489]: time="2025-09-11T23:55:57.593722903Z" level=info msg="StartContainer for \"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\" returns successfully" Sep 11 23:55:57.600528 containerd[1489]: time="2025-09-11T23:55:57.600481670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\" id:\"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\" pid:3350 exited_at:{seconds:1757634957 nanos:586779407}" Sep 11 23:55:57.600528 containerd[1489]: time="2025-09-11T23:55:57.600490631Z" level=info msg="received exit event container_id:\"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\" id:\"97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2\" pid:3350 exited_at:{seconds:1757634957 nanos:586779407}" Sep 11 23:55:57.642765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97c4cc4451f61e8908793d1f2f6a5f52b32c34bb0e95929cd70dd10c3fc613e2-rootfs.mount: Deactivated successfully. 
Sep 11 23:55:58.359775 kubelet[2633]: E0911 23:55:58.359674 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtjz" podUID="bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8" Sep 11 23:55:58.432750 containerd[1489]: time="2025-09-11T23:55:58.432688874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 23:55:58.451463 kubelet[2633]: I0911 23:55:58.451407 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c4d596669-w2qhv" podStartSLOduration=3.7535761389999998 podStartE2EDuration="5.451370981s" podCreationTimestamp="2025-09-11 23:55:53 +0000 UTC" firstStartedPulling="2025-09-11 23:55:54.505994568 +0000 UTC m=+21.235332223" lastFinishedPulling="2025-09-11 23:55:56.20378945 +0000 UTC m=+22.933127065" observedRunningTime="2025-09-11 23:55:56.440140885 +0000 UTC m=+23.169478500" watchObservedRunningTime="2025-09-11 23:55:58.451370981 +0000 UTC m=+25.180708636" Sep 11 23:56:00.359761 kubelet[2633]: E0911 23:56:00.359542 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtjz" podUID="bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8" Sep 11 23:56:00.805917 containerd[1489]: time="2025-09-11T23:56:00.805856705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:00.806680 containerd[1489]: time="2025-09-11T23:56:00.806636578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 11 23:56:00.807561 containerd[1489]: 
time="2025-09-11T23:56:00.807406411Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:00.809464 containerd[1489]: time="2025-09-11T23:56:00.809430817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:00.810384 containerd[1489]: time="2025-09-11T23:56:00.810338616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.377603219s" Sep 11 23:56:00.810384 containerd[1489]: time="2025-09-11T23:56:00.810377898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 11 23:56:00.812414 containerd[1489]: time="2025-09-11T23:56:00.812381943Z" level=info msg="CreateContainer within sandbox \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 23:56:00.823289 containerd[1489]: time="2025-09-11T23:56:00.822952074Z" level=info msg="Container 31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:00.832417 containerd[1489]: time="2025-09-11T23:56:00.832321274Z" level=info msg="CreateContainer within sandbox \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\"" Sep 11 
23:56:00.833813 containerd[1489]: time="2025-09-11T23:56:00.833149589Z" level=info msg="StartContainer for \"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\"" Sep 11 23:56:00.835592 containerd[1489]: time="2025-09-11T23:56:00.835557092Z" level=info msg="connecting to shim 31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209" address="unix:///run/containerd/s/86e7ec49bba57509ffe96f69f4c7e880a4443912aa8f468c11f97370f6e6e3c0" protocol=ttrpc version=3 Sep 11 23:56:00.867967 systemd[1]: Started cri-containerd-31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209.scope - libcontainer container 31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209. Sep 11 23:56:00.972024 containerd[1489]: time="2025-09-11T23:56:00.971975432Z" level=info msg="StartContainer for \"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\" returns successfully" Sep 11 23:56:01.481912 systemd[1]: cri-containerd-31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209.scope: Deactivated successfully. Sep 11 23:56:01.482223 systemd[1]: cri-containerd-31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209.scope: Consumed 483ms CPU time, 176.6M memory peak, 2.1M read from disk, 165.8M written to disk. 
Sep 11 23:56:01.483076 containerd[1489]: time="2025-09-11T23:56:01.482897743Z" level=info msg="received exit event container_id:\"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\" id:\"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\" pid:3430 exited_at:{seconds:1757634961 nanos:482244036}" Sep 11 23:56:01.483329 containerd[1489]: time="2025-09-11T23:56:01.483290599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\" id:\"31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209\" pid:3430 exited_at:{seconds:1757634961 nanos:482244036}" Sep 11 23:56:01.500905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31b8bd1a7a73db2d00540d91f90726be263e4b8e578a8beb53d43d18c67fd209-rootfs.mount: Deactivated successfully. Sep 11 23:56:01.531391 kubelet[2633]: I0911 23:56:01.531334 2633 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 11 23:56:01.581167 systemd[1]: Created slice kubepods-besteffort-pod4397b26b_df1d_4a49_b10f_5c027daf1414.slice - libcontainer container kubepods-besteffort-pod4397b26b_df1d_4a49_b10f_5c027daf1414.slice. Sep 11 23:56:01.591405 systemd[1]: Created slice kubepods-besteffort-podd0cb54ed_2781_4e3d_a425_996cae24df23.slice - libcontainer container kubepods-besteffort-podd0cb54ed_2781_4e3d_a425_996cae24df23.slice. Sep 11 23:56:01.599435 systemd[1]: Created slice kubepods-besteffort-pod1dbb72ea_13b1_45f4_b6ec_65f4f6286005.slice - libcontainer container kubepods-besteffort-pod1dbb72ea_13b1_45f4_b6ec_65f4f6286005.slice. Sep 11 23:56:01.610849 systemd[1]: Created slice kubepods-besteffort-pod50831420_98b3_4476_b1e2_f89e1839d381.slice - libcontainer container kubepods-besteffort-pod50831420_98b3_4476_b1e2_f89e1839d381.slice. 
Sep 11 23:56:01.615246 systemd[1]: Created slice kubepods-burstable-podee0f6e11_5e4a_4e87_8841_5a6300168397.slice - libcontainer container kubepods-burstable-podee0f6e11_5e4a_4e87_8841_5a6300168397.slice. Sep 11 23:56:01.623133 systemd[1]: Created slice kubepods-besteffort-pod3ab3c824_d32f_4621_a927_bceeffc6024c.slice - libcontainer container kubepods-besteffort-pod3ab3c824_d32f_4621_a927_bceeffc6024c.slice. Sep 11 23:56:01.626785 systemd[1]: Created slice kubepods-burstable-pod916c0844_7b62_4825_8855_5d497b77b311.slice - libcontainer container kubepods-burstable-pod916c0844_7b62_4825_8855_5d497b77b311.slice. Sep 11 23:56:01.660972 kubelet[2633]: I0911 23:56:01.660920 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsj95\" (UniqueName: \"kubernetes.io/projected/d0cb54ed-2781-4e3d-a425-996cae24df23-kube-api-access-jsj95\") pod \"goldmane-7988f88666-w6bcn\" (UID: \"d0cb54ed-2781-4e3d-a425-996cae24df23\") " pod="calico-system/goldmane-7988f88666-w6bcn" Sep 11 23:56:01.661096 kubelet[2633]: I0911 23:56:01.661010 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50831420-98b3-4476-b1e2-f89e1839d381-calico-apiserver-certs\") pod \"calico-apiserver-d96c64c58-m94tl\" (UID: \"50831420-98b3-4476-b1e2-f89e1839d381\") " pod="calico-apiserver/calico-apiserver-d96c64c58-m94tl" Sep 11 23:56:01.661096 kubelet[2633]: I0911 23:56:01.661064 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gkn\" (UniqueName: \"kubernetes.io/projected/1dbb72ea-13b1-45f4-b6ec-65f4f6286005-kube-api-access-66gkn\") pod \"calico-apiserver-d96c64c58-dqlfz\" (UID: \"1dbb72ea-13b1-45f4-b6ec-65f4f6286005\") " pod="calico-apiserver/calico-apiserver-d96c64c58-dqlfz" Sep 11 23:56:01.661096 kubelet[2633]: I0911 23:56:01.661088 2633 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/916c0844-7b62-4825-8855-5d497b77b311-config-volume\") pod \"coredns-7c65d6cfc9-x9prl\" (UID: \"916c0844-7b62-4825-8855-5d497b77b311\") " pod="kube-system/coredns-7c65d6cfc9-x9prl" Sep 11 23:56:01.661194 kubelet[2633]: I0911 23:56:01.661109 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn68\" (UniqueName: \"kubernetes.io/projected/4397b26b-df1d-4a49-b10f-5c027daf1414-kube-api-access-jmn68\") pod \"calico-kube-controllers-5dfccd6f76-6k2x9\" (UID: \"4397b26b-df1d-4a49-b10f-5c027daf1414\") " pod="calico-system/calico-kube-controllers-5dfccd6f76-6k2x9" Sep 11 23:56:01.661194 kubelet[2633]: I0911 23:56:01.661132 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvc6j\" (UniqueName: \"kubernetes.io/projected/ee0f6e11-5e4a-4e87-8841-5a6300168397-kube-api-access-qvc6j\") pod \"coredns-7c65d6cfc9-8cptp\" (UID: \"ee0f6e11-5e4a-4e87-8841-5a6300168397\") " pod="kube-system/coredns-7c65d6cfc9-8cptp" Sep 11 23:56:01.661194 kubelet[2633]: I0911 23:56:01.661149 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-backend-key-pair\") pod \"whisker-559987bdc9-6rbmp\" (UID: \"3ab3c824-d32f-4621-a927-bceeffc6024c\") " pod="calico-system/whisker-559987bdc9-6rbmp" Sep 11 23:56:01.661194 kubelet[2633]: I0911 23:56:01.661164 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-ca-bundle\") pod \"whisker-559987bdc9-6rbmp\" (UID: \"3ab3c824-d32f-4621-a927-bceeffc6024c\") " pod="calico-system/whisker-559987bdc9-6rbmp" Sep 11 
23:56:01.661194 kubelet[2633]: I0911 23:56:01.661185 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee0f6e11-5e4a-4e87-8841-5a6300168397-config-volume\") pod \"coredns-7c65d6cfc9-8cptp\" (UID: \"ee0f6e11-5e4a-4e87-8841-5a6300168397\") " pod="kube-system/coredns-7c65d6cfc9-8cptp" Sep 11 23:56:01.661296 kubelet[2633]: I0911 23:56:01.661206 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1dbb72ea-13b1-45f4-b6ec-65f4f6286005-calico-apiserver-certs\") pod \"calico-apiserver-d96c64c58-dqlfz\" (UID: \"1dbb72ea-13b1-45f4-b6ec-65f4f6286005\") " pod="calico-apiserver/calico-apiserver-d96c64c58-dqlfz" Sep 11 23:56:01.661296 kubelet[2633]: I0911 23:56:01.661223 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbm8n\" (UniqueName: \"kubernetes.io/projected/3ab3c824-d32f-4621-a927-bceeffc6024c-kube-api-access-wbm8n\") pod \"whisker-559987bdc9-6rbmp\" (UID: \"3ab3c824-d32f-4621-a927-bceeffc6024c\") " pod="calico-system/whisker-559987bdc9-6rbmp" Sep 11 23:56:01.661296 kubelet[2633]: I0911 23:56:01.661237 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbdg\" (UniqueName: \"kubernetes.io/projected/916c0844-7b62-4825-8855-5d497b77b311-kube-api-access-wxbdg\") pod \"coredns-7c65d6cfc9-x9prl\" (UID: \"916c0844-7b62-4825-8855-5d497b77b311\") " pod="kube-system/coredns-7c65d6cfc9-x9prl" Sep 11 23:56:01.661296 kubelet[2633]: I0911 23:56:01.661273 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0cb54ed-2781-4e3d-a425-996cae24df23-config\") pod \"goldmane-7988f88666-w6bcn\" (UID: \"d0cb54ed-2781-4e3d-a425-996cae24df23\") " 
pod="calico-system/goldmane-7988f88666-w6bcn" Sep 11 23:56:01.661296 kubelet[2633]: I0911 23:56:01.661289 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0cb54ed-2781-4e3d-a425-996cae24df23-goldmane-ca-bundle\") pod \"goldmane-7988f88666-w6bcn\" (UID: \"d0cb54ed-2781-4e3d-a425-996cae24df23\") " pod="calico-system/goldmane-7988f88666-w6bcn" Sep 11 23:56:01.661397 kubelet[2633]: I0911 23:56:01.661306 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d0cb54ed-2781-4e3d-a425-996cae24df23-goldmane-key-pair\") pod \"goldmane-7988f88666-w6bcn\" (UID: \"d0cb54ed-2781-4e3d-a425-996cae24df23\") " pod="calico-system/goldmane-7988f88666-w6bcn" Sep 11 23:56:01.661397 kubelet[2633]: I0911 23:56:01.661324 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8c9h\" (UniqueName: \"kubernetes.io/projected/50831420-98b3-4476-b1e2-f89e1839d381-kube-api-access-h8c9h\") pod \"calico-apiserver-d96c64c58-m94tl\" (UID: \"50831420-98b3-4476-b1e2-f89e1839d381\") " pod="calico-apiserver/calico-apiserver-d96c64c58-m94tl" Sep 11 23:56:01.661397 kubelet[2633]: I0911 23:56:01.661341 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4397b26b-df1d-4a49-b10f-5c027daf1414-tigera-ca-bundle\") pod \"calico-kube-controllers-5dfccd6f76-6k2x9\" (UID: \"4397b26b-df1d-4a49-b10f-5c027daf1414\") " pod="calico-system/calico-kube-controllers-5dfccd6f76-6k2x9" Sep 11 23:56:01.887303 containerd[1489]: time="2025-09-11T23:56:01.887046548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfccd6f76-6k2x9,Uid:4397b26b-df1d-4a49-b10f-5c027daf1414,Namespace:calico-system,Attempt:0,}" Sep 11 
23:56:01.895736 containerd[1489]: time="2025-09-11T23:56:01.895673222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-w6bcn,Uid:d0cb54ed-2781-4e3d-a425-996cae24df23,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:01.903527 containerd[1489]: time="2025-09-11T23:56:01.903231092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-dqlfz,Uid:1dbb72ea-13b1-45f4-b6ec-65f4f6286005,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:56:01.920813 kubelet[2633]: E0911 23:56:01.920782 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:01.923056 containerd[1489]: time="2025-09-11T23:56:01.923011142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8cptp,Uid:ee0f6e11-5e4a-4e87-8841-5a6300168397,Namespace:kube-system,Attempt:0,}" Sep 11 23:56:01.923336 containerd[1489]: time="2025-09-11T23:56:01.923293514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-m94tl,Uid:50831420-98b3-4476-b1e2-f89e1839d381,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:56:01.926714 containerd[1489]: time="2025-09-11T23:56:01.926660732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-559987bdc9-6rbmp,Uid:3ab3c824-d32f-4621-a927-bceeffc6024c,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:01.931707 kubelet[2633]: E0911 23:56:01.931564 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:01.932526 containerd[1489]: time="2025-09-11T23:56:01.932497531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9prl,Uid:916c0844-7b62-4825-8855-5d497b77b311,Namespace:kube-system,Attempt:0,}" Sep 11 23:56:01.997563 containerd[1489]: 
time="2025-09-11T23:56:01.997506156Z" level=error msg="Failed to destroy network for sandbox \"ab58b945f909c55dbdc716a2f6a75f52c2a61830dc8134e0d9b0750f138b2e08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:01.999456 containerd[1489]: time="2025-09-11T23:56:01.999423594Z" level=error msg="Failed to destroy network for sandbox \"ff43dc676a7f89f338484bf509cd93214da4df834d28d7da54e12421c4d8ccf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.001803 containerd[1489]: time="2025-09-11T23:56:02.001771049Z" level=error msg="Failed to destroy network for sandbox \"37461eedea3ad7f0fbdef326a2b82538eb61155e67597f8f5c730bbc703558cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.037211 containerd[1489]: time="2025-09-11T23:56:02.037105202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfccd6f76-6k2x9,Uid:4397b26b-df1d-4a49-b10f-5c027daf1414,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab58b945f909c55dbdc716a2f6a75f52c2a61830dc8134e0d9b0750f138b2e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.038228 containerd[1489]: time="2025-09-11T23:56:02.038145443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-dqlfz,Uid:1dbb72ea-13b1-45f4-b6ec-65f4f6286005,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff43dc676a7f89f338484bf509cd93214da4df834d28d7da54e12421c4d8ccf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.039059 containerd[1489]: time="2025-09-11T23:56:02.039000316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-w6bcn,Uid:d0cb54ed-2781-4e3d-a425-996cae24df23,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37461eedea3ad7f0fbdef326a2b82538eb61155e67597f8f5c730bbc703558cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.040099 kubelet[2633]: E0911 23:56:02.039797 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37461eedea3ad7f0fbdef326a2b82538eb61155e67597f8f5c730bbc703558cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.040099 kubelet[2633]: E0911 23:56:02.039888 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37461eedea3ad7f0fbdef326a2b82538eb61155e67597f8f5c730bbc703558cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-w6bcn" Sep 11 23:56:02.040099 kubelet[2633]: E0911 23:56:02.039911 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"37461eedea3ad7f0fbdef326a2b82538eb61155e67597f8f5c730bbc703558cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-w6bcn" Sep 11 23:56:02.040445 kubelet[2633]: E0911 23:56:02.039958 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-w6bcn_calico-system(d0cb54ed-2781-4e3d-a425-996cae24df23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-w6bcn_calico-system(d0cb54ed-2781-4e3d-a425-996cae24df23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37461eedea3ad7f0fbdef326a2b82538eb61155e67597f8f5c730bbc703558cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-w6bcn" podUID="d0cb54ed-2781-4e3d-a425-996cae24df23" Sep 11 23:56:02.040445 kubelet[2633]: E0911 23:56:02.040005 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff43dc676a7f89f338484bf509cd93214da4df834d28d7da54e12421c4d8ccf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.040445 kubelet[2633]: E0911 23:56:02.040027 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff43dc676a7f89f338484bf509cd93214da4df834d28d7da54e12421c4d8ccf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d96c64c58-dqlfz" Sep 11 23:56:02.040588 kubelet[2633]: E0911 23:56:02.040041 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff43dc676a7f89f338484bf509cd93214da4df834d28d7da54e12421c4d8ccf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d96c64c58-dqlfz" Sep 11 23:56:02.040588 kubelet[2633]: E0911 23:56:02.040068 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d96c64c58-dqlfz_calico-apiserver(1dbb72ea-13b1-45f4-b6ec-65f4f6286005)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d96c64c58-dqlfz_calico-apiserver(1dbb72ea-13b1-45f4-b6ec-65f4f6286005)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff43dc676a7f89f338484bf509cd93214da4df834d28d7da54e12421c4d8ccf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d96c64c58-dqlfz" podUID="1dbb72ea-13b1-45f4-b6ec-65f4f6286005" Sep 11 23:56:02.040588 kubelet[2633]: E0911 23:56:02.040412 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab58b945f909c55dbdc716a2f6a75f52c2a61830dc8134e0d9b0750f138b2e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.040707 kubelet[2633]: E0911 23:56:02.040488 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab58b945f909c55dbdc716a2f6a75f52c2a61830dc8134e0d9b0750f138b2e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfccd6f76-6k2x9" Sep 11 23:56:02.040707 kubelet[2633]: E0911 23:56:02.040504 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab58b945f909c55dbdc716a2f6a75f52c2a61830dc8134e0d9b0750f138b2e08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfccd6f76-6k2x9" Sep 11 23:56:02.040707 kubelet[2633]: E0911 23:56:02.040564 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfccd6f76-6k2x9_calico-system(4397b26b-df1d-4a49-b10f-5c027daf1414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfccd6f76-6k2x9_calico-system(4397b26b-df1d-4a49-b10f-5c027daf1414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab58b945f909c55dbdc716a2f6a75f52c2a61830dc8134e0d9b0750f138b2e08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfccd6f76-6k2x9" podUID="4397b26b-df1d-4a49-b10f-5c027daf1414" Sep 11 23:56:02.094916 containerd[1489]: time="2025-09-11T23:56:02.094864998Z" level=error msg="Failed to destroy network for sandbox \"d48c4adfc0ef2438a2b58c93211dfd278a50d5dd651753d47e263ace87424acd\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.095975 containerd[1489]: time="2025-09-11T23:56:02.095935080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-m94tl,Uid:50831420-98b3-4476-b1e2-f89e1839d381,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c4adfc0ef2438a2b58c93211dfd278a50d5dd651753d47e263ace87424acd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.096176 kubelet[2633]: E0911 23:56:02.096139 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c4adfc0ef2438a2b58c93211dfd278a50d5dd651753d47e263ace87424acd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.096281 kubelet[2633]: E0911 23:56:02.096197 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c4adfc0ef2438a2b58c93211dfd278a50d5dd651753d47e263ace87424acd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d96c64c58-m94tl" Sep 11 23:56:02.096281 kubelet[2633]: E0911 23:56:02.096217 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d48c4adfc0ef2438a2b58c93211dfd278a50d5dd651753d47e263ace87424acd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d96c64c58-m94tl" Sep 11 23:56:02.098349 kubelet[2633]: E0911 23:56:02.096257 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d96c64c58-m94tl_calico-apiserver(50831420-98b3-4476-b1e2-f89e1839d381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d96c64c58-m94tl_calico-apiserver(50831420-98b3-4476-b1e2-f89e1839d381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d48c4adfc0ef2438a2b58c93211dfd278a50d5dd651753d47e263ace87424acd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d96c64c58-m94tl" podUID="50831420-98b3-4476-b1e2-f89e1839d381" Sep 11 23:56:02.103648 containerd[1489]: time="2025-09-11T23:56:02.103384094Z" level=error msg="Failed to destroy network for sandbox \"b90200aa18930956d79c0b95edd9c962e0b622ec4e904ef94a89b86d70d57eb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.105853 containerd[1489]: time="2025-09-11T23:56:02.105806789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8cptp,Uid:ee0f6e11-5e4a-4e87-8841-5a6300168397,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90200aa18930956d79c0b95edd9c962e0b622ec4e904ef94a89b86d70d57eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
11 23:56:02.106077 kubelet[2633]: E0911 23:56:02.106036 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90200aa18930956d79c0b95edd9c962e0b622ec4e904ef94a89b86d70d57eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.106120 kubelet[2633]: E0911 23:56:02.106098 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90200aa18930956d79c0b95edd9c962e0b622ec4e904ef94a89b86d70d57eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8cptp" Sep 11 23:56:02.106174 kubelet[2633]: E0911 23:56:02.106117 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90200aa18930956d79c0b95edd9c962e0b622ec4e904ef94a89b86d70d57eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8cptp" Sep 11 23:56:02.106243 kubelet[2633]: E0911 23:56:02.106162 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8cptp_kube-system(ee0f6e11-5e4a-4e87-8841-5a6300168397)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8cptp_kube-system(ee0f6e11-5e4a-4e87-8841-5a6300168397)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b90200aa18930956d79c0b95edd9c962e0b622ec4e904ef94a89b86d70d57eb9\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8cptp" podUID="ee0f6e11-5e4a-4e87-8841-5a6300168397" Sep 11 23:56:02.109638 containerd[1489]: time="2025-09-11T23:56:02.109591539Z" level=error msg="Failed to destroy network for sandbox \"b52fea1bb5b456305879bf8070820887599db1254efbcd545785b5d1b72f09f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.110423 containerd[1489]: time="2025-09-11T23:56:02.110381650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-559987bdc9-6rbmp,Uid:3ab3c824-d32f-4621-a927-bceeffc6024c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52fea1bb5b456305879bf8070820887599db1254efbcd545785b5d1b72f09f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.110601 kubelet[2633]: E0911 23:56:02.110572 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52fea1bb5b456305879bf8070820887599db1254efbcd545785b5d1b72f09f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.110649 kubelet[2633]: E0911 23:56:02.110622 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52fea1bb5b456305879bf8070820887599db1254efbcd545785b5d1b72f09f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-559987bdc9-6rbmp" Sep 11 23:56:02.110675 kubelet[2633]: E0911 23:56:02.110654 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52fea1bb5b456305879bf8070820887599db1254efbcd545785b5d1b72f09f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-559987bdc9-6rbmp" Sep 11 23:56:02.110704 kubelet[2633]: E0911 23:56:02.110684 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-559987bdc9-6rbmp_calico-system(3ab3c824-d32f-4621-a927-bceeffc6024c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-559987bdc9-6rbmp_calico-system(3ab3c824-d32f-4621-a927-bceeffc6024c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b52fea1bb5b456305879bf8070820887599db1254efbcd545785b5d1b72f09f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-559987bdc9-6rbmp" podUID="3ab3c824-d32f-4621-a927-bceeffc6024c" Sep 11 23:56:02.116617 containerd[1489]: time="2025-09-11T23:56:02.116577734Z" level=error msg="Failed to destroy network for sandbox \"7f6e7bfc2767cae1445cb5b2ef7a584de676c8ad4eed9bc36c45206a107d1654\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.117657 containerd[1489]: time="2025-09-11T23:56:02.117623695Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9prl,Uid:916c0844-7b62-4825-8855-5d497b77b311,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6e7bfc2767cae1445cb5b2ef7a584de676c8ad4eed9bc36c45206a107d1654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.117871 kubelet[2633]: E0911 23:56:02.117815 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6e7bfc2767cae1445cb5b2ef7a584de676c8ad4eed9bc36c45206a107d1654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.117918 kubelet[2633]: E0911 23:56:02.117888 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6e7bfc2767cae1445cb5b2ef7a584de676c8ad4eed9bc36c45206a107d1654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x9prl" Sep 11 23:56:02.117918 kubelet[2633]: E0911 23:56:02.117904 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6e7bfc2767cae1445cb5b2ef7a584de676c8ad4eed9bc36c45206a107d1654\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-x9prl" Sep 11 23:56:02.117961 kubelet[2633]: E0911 23:56:02.117937 2633 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x9prl_kube-system(916c0844-7b62-4825-8855-5d497b77b311)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x9prl_kube-system(916c0844-7b62-4825-8855-5d497b77b311)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f6e7bfc2767cae1445cb5b2ef7a584de676c8ad4eed9bc36c45206a107d1654\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-x9prl" podUID="916c0844-7b62-4825-8855-5d497b77b311" Sep 11 23:56:02.369303 systemd[1]: Created slice kubepods-besteffort-podbfc260fa_d1cf_4718_a7cc_517a9bdb2fd8.slice - libcontainer container kubepods-besteffort-podbfc260fa_d1cf_4718_a7cc_517a9bdb2fd8.slice. Sep 11 23:56:02.371569 containerd[1489]: time="2025-09-11T23:56:02.371530863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtjz,Uid:bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:02.413933 containerd[1489]: time="2025-09-11T23:56:02.413878132Z" level=error msg="Failed to destroy network for sandbox \"060537dfc00cabc9b455e65058198690a822f86df3a4114b886d337fbbe29d3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.415039 containerd[1489]: time="2025-09-11T23:56:02.415003696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtjz,Uid:bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"060537dfc00cabc9b455e65058198690a822f86df3a4114b886d337fbbe29d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.415265 kubelet[2633]: E0911 23:56:02.415206 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060537dfc00cabc9b455e65058198690a822f86df3a4114b886d337fbbe29d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:56:02.415314 kubelet[2633]: E0911 23:56:02.415293 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060537dfc00cabc9b455e65058198690a822f86df3a4114b886d337fbbe29d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:56:02.415337 kubelet[2633]: E0911 23:56:02.415312 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060537dfc00cabc9b455e65058198690a822f86df3a4114b886d337fbbe29d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rtjz" Sep 11 23:56:02.415387 kubelet[2633]: E0911 23:56:02.415359 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8rtjz_calico-system(bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8rtjz_calico-system(bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"060537dfc00cabc9b455e65058198690a822f86df3a4114b886d337fbbe29d3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8rtjz" podUID="bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8" Sep 11 23:56:02.445678 containerd[1489]: time="2025-09-11T23:56:02.445641584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 23:56:02.825933 systemd[1]: run-netns-cni\x2dda0088ac\x2d2d8d\x2d51b0\x2d506d\x2dd97b37b9a8ae.mount: Deactivated successfully. Sep 11 23:56:02.826018 systemd[1]: run-netns-cni\x2dfd623765\x2d0dc1\x2daf30\x2df4a4\x2d26e832845704.mount: Deactivated successfully. Sep 11 23:56:02.826062 systemd[1]: run-netns-cni\x2d4757d23b\x2d474b\x2dbe94\x2d7357\x2d3c39a5e12180.mount: Deactivated successfully. Sep 11 23:56:02.826104 systemd[1]: run-netns-cni\x2dc5c72591\x2da1d1\x2d83af\x2d11af\x2dbe1e610f3989.mount: Deactivated successfully. Sep 11 23:56:06.230139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904545773.mount: Deactivated successfully. 
Sep 11 23:56:06.500405 containerd[1489]: time="2025-09-11T23:56:06.500276303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 11 23:56:06.504510 containerd[1489]: time="2025-09-11T23:56:06.504472925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.05879038s" Sep 11 23:56:06.504510 containerd[1489]: time="2025-09-11T23:56:06.504506527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 11 23:56:06.513413 containerd[1489]: time="2025-09-11T23:56:06.511133392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:06.513824 containerd[1489]: time="2025-09-11T23:56:06.513789603Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:06.514279 containerd[1489]: time="2025-09-11T23:56:06.514253178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:06.520911 containerd[1489]: time="2025-09-11T23:56:06.520878164Z" level=info msg="CreateContainer within sandbox \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 23:56:06.532788 containerd[1489]: time="2025-09-11T23:56:06.532257551Z" level=info msg="Container 
ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:06.548131 containerd[1489]: time="2025-09-11T23:56:06.547761999Z" level=info msg="CreateContainer within sandbox \"0690d3ad70c99a9c257d7def502e5920a8e1f4dfa33a89cd299f6eaf4b8c665b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920\"" Sep 11 23:56:06.548454 containerd[1489]: time="2025-09-11T23:56:06.548418541Z" level=info msg="StartContainer for \"ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920\"" Sep 11 23:56:06.550708 containerd[1489]: time="2025-09-11T23:56:06.550657618Z" level=info msg="connecting to shim ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920" address="unix:///run/containerd/s/86e7ec49bba57509ffe96f69f4c7e880a4443912aa8f468c11f97370f6e6e3c0" protocol=ttrpc version=3 Sep 11 23:56:06.580993 systemd[1]: Started cri-containerd-ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920.scope - libcontainer container ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920. Sep 11 23:56:06.620115 containerd[1489]: time="2025-09-11T23:56:06.620072141Z" level=info msg="StartContainer for \"ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920\" returns successfully" Sep 11 23:56:06.749865 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 23:56:06.749958 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 23:56:06.895577 kubelet[2633]: I0911 23:56:06.894749 2633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbm8n\" (UniqueName: \"kubernetes.io/projected/3ab3c824-d32f-4621-a927-bceeffc6024c-kube-api-access-wbm8n\") pod \"3ab3c824-d32f-4621-a927-bceeffc6024c\" (UID: \"3ab3c824-d32f-4621-a927-bceeffc6024c\") " Sep 11 23:56:06.895577 kubelet[2633]: I0911 23:56:06.894831 2633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-ca-bundle\") pod \"3ab3c824-d32f-4621-a927-bceeffc6024c\" (UID: \"3ab3c824-d32f-4621-a927-bceeffc6024c\") " Sep 11 23:56:06.895577 kubelet[2633]: I0911 23:56:06.894880 2633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-backend-key-pair\") pod \"3ab3c824-d32f-4621-a927-bceeffc6024c\" (UID: \"3ab3c824-d32f-4621-a927-bceeffc6024c\") " Sep 11 23:56:06.905539 kubelet[2633]: I0911 23:56:06.905490 2633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3ab3c824-d32f-4621-a927-bceeffc6024c" (UID: "3ab3c824-d32f-4621-a927-bceeffc6024c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 23:56:06.910881 kubelet[2633]: I0911 23:56:06.910603 2633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3ab3c824-d32f-4621-a927-bceeffc6024c" (UID: "3ab3c824-d32f-4621-a927-bceeffc6024c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 23:56:06.910881 kubelet[2633]: I0911 23:56:06.910620 2633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab3c824-d32f-4621-a927-bceeffc6024c-kube-api-access-wbm8n" (OuterVolumeSpecName: "kube-api-access-wbm8n") pod "3ab3c824-d32f-4621-a927-bceeffc6024c" (UID: "3ab3c824-d32f-4621-a927-bceeffc6024c"). InnerVolumeSpecName "kube-api-access-wbm8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 23:56:06.996037 kubelet[2633]: I0911 23:56:06.995985 2633 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbm8n\" (UniqueName: \"kubernetes.io/projected/3ab3c824-d32f-4621-a927-bceeffc6024c-kube-api-access-wbm8n\") on node \"localhost\" DevicePath \"\"" Sep 11 23:56:06.996228 kubelet[2633]: I0911 23:56:06.996057 2633 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 23:56:06.996228 kubelet[2633]: I0911 23:56:06.996076 2633 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ab3c824-d32f-4621-a927-bceeffc6024c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 23:56:07.230808 systemd[1]: var-lib-kubelet-pods-3ab3c824\x2dd32f\x2d4621\x2da927\x2dbceeffc6024c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbm8n.mount: Deactivated successfully. Sep 11 23:56:07.230902 systemd[1]: var-lib-kubelet-pods-3ab3c824\x2dd32f\x2d4621\x2da927\x2dbceeffc6024c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 23:56:07.372551 systemd[1]: Removed slice kubepods-besteffort-pod3ab3c824_d32f_4621_a927_bceeffc6024c.slice - libcontainer container kubepods-besteffort-pod3ab3c824_d32f_4621_a927_bceeffc6024c.slice. 
Sep 11 23:56:07.538650 kubelet[2633]: I0911 23:56:07.538584 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-979xz" podStartSLOduration=1.683622122 podStartE2EDuration="13.538560676s" podCreationTimestamp="2025-09-11 23:55:54 +0000 UTC" firstStartedPulling="2025-09-11 23:55:54.650238075 +0000 UTC m=+21.379575730" lastFinishedPulling="2025-09-11 23:56:06.505176629 +0000 UTC m=+33.234514284" observedRunningTime="2025-09-11 23:56:07.528097211 +0000 UTC m=+34.257434866" watchObservedRunningTime="2025-09-11 23:56:07.538560676 +0000 UTC m=+34.267898331" Sep 11 23:56:07.599162 kubelet[2633]: I0911 23:56:07.599120 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/618c1755-3b1c-4cd9-926d-4e0a5e41487c-whisker-ca-bundle\") pod \"whisker-6f845bb6c8-kjt5c\" (UID: \"618c1755-3b1c-4cd9-926d-4e0a5e41487c\") " pod="calico-system/whisker-6f845bb6c8-kjt5c" Sep 11 23:56:07.599162 kubelet[2633]: I0911 23:56:07.599201 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/618c1755-3b1c-4cd9-926d-4e0a5e41487c-whisker-backend-key-pair\") pod \"whisker-6f845bb6c8-kjt5c\" (UID: \"618c1755-3b1c-4cd9-926d-4e0a5e41487c\") " pod="calico-system/whisker-6f845bb6c8-kjt5c" Sep 11 23:56:07.599162 kubelet[2633]: I0911 23:56:07.599230 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lcdc\" (UniqueName: \"kubernetes.io/projected/618c1755-3b1c-4cd9-926d-4e0a5e41487c-kube-api-access-4lcdc\") pod \"whisker-6f845bb6c8-kjt5c\" (UID: \"618c1755-3b1c-4cd9-926d-4e0a5e41487c\") " pod="calico-system/whisker-6f845bb6c8-kjt5c" Sep 11 23:56:07.604148 systemd[1]: Created slice kubepods-besteffort-pod618c1755_3b1c_4cd9_926d_4e0a5e41487c.slice - libcontainer container 
kubepods-besteffort-pod618c1755_3b1c_4cd9_926d_4e0a5e41487c.slice. Sep 11 23:56:07.658517 containerd[1489]: time="2025-09-11T23:56:07.658458981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920\" id:\"3f34ec6b2bdf9025558b9b7df9dac0364cdff3f574028bb8ba7b51e9f26c49f3\" pid:3817 exit_status:1 exited_at:{seconds:1757634967 nanos:657778638}" Sep 11 23:56:07.911521 containerd[1489]: time="2025-09-11T23:56:07.911403783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f845bb6c8-kjt5c,Uid:618c1755-3b1c-4cd9-926d-4e0a5e41487c,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:08.159912 systemd-networkd[1429]: cali28cd4702e04: Link UP Sep 11 23:56:08.160891 systemd-networkd[1429]: cali28cd4702e04: Gained carrier Sep 11 23:56:08.178163 containerd[1489]: 2025-09-11 23:56:07.987 [INFO][3834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:56:08.178163 containerd[1489]: 2025-09-11 23:56:08.015 [INFO][3834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0 whisker-6f845bb6c8- calico-system 618c1755-3b1c-4cd9-926d-4e0a5e41487c 873 0 2025-09-11 23:56:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f845bb6c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f845bb6c8-kjt5c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28cd4702e04 [] [] }} ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-" Sep 11 23:56:08.178163 containerd[1489]: 2025-09-11 23:56:08.015 [INFO][3834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.178163 containerd[1489]: 2025-09-11 23:56:08.103 [INFO][3848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" HandleID="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Workload="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.104 [INFO][3848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" HandleID="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Workload="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391c50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f845bb6c8-kjt5c", "timestamp":"2025-09-11 23:56:08.103945328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.104 [INFO][3848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.104 [INFO][3848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.104 [INFO][3848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.118 [INFO][3848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" host="localhost" Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.127 [INFO][3848] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.134 [INFO][3848] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.136 [INFO][3848] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.138 [INFO][3848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:08.178943 containerd[1489]: 2025-09-11 23:56:08.138 [INFO][3848] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" host="localhost" Sep 11 23:56:08.179184 containerd[1489]: 2025-09-11 23:56:08.140 [INFO][3848] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7 Sep 11 23:56:08.179184 containerd[1489]: 2025-09-11 23:56:08.145 [INFO][3848] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" host="localhost" Sep 11 23:56:08.179184 containerd[1489]: 2025-09-11 23:56:08.150 [INFO][3848] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" host="localhost" Sep 11 23:56:08.179184 containerd[1489]: 2025-09-11 23:56:08.150 [INFO][3848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" host="localhost" Sep 11 23:56:08.179184 containerd[1489]: 2025-09-11 23:56:08.150 [INFO][3848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:56:08.179184 containerd[1489]: 2025-09-11 23:56:08.150 [INFO][3848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" HandleID="k8s-pod-network.6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Workload="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.179292 containerd[1489]: 2025-09-11 23:56:08.152 [INFO][3834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0", GenerateName:"whisker-6f845bb6c8-", Namespace:"calico-system", SelfLink:"", UID:"618c1755-3b1c-4cd9-926d-4e0a5e41487c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 56, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f845bb6c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f845bb6c8-kjt5c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28cd4702e04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:08.179292 containerd[1489]: 2025-09-11 23:56:08.152 [INFO][3834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.179357 containerd[1489]: 2025-09-11 23:56:08.153 [INFO][3834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28cd4702e04 ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.179357 containerd[1489]: 2025-09-11 23:56:08.160 [INFO][3834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.179396 containerd[1489]: 2025-09-11 23:56:08.161 [INFO][3834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" 
WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0", GenerateName:"whisker-6f845bb6c8-", Namespace:"calico-system", SelfLink:"", UID:"618c1755-3b1c-4cd9-926d-4e0a5e41487c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 56, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f845bb6c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7", Pod:"whisker-6f845bb6c8-kjt5c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28cd4702e04", MAC:"b6:61:6e:be:06:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:08.179444 containerd[1489]: 2025-09-11 23:56:08.172 [INFO][3834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" Namespace="calico-system" Pod="whisker-6f845bb6c8-kjt5c" WorkloadEndpoint="localhost-k8s-whisker--6f845bb6c8--kjt5c-eth0" Sep 11 23:56:08.222462 containerd[1489]: time="2025-09-11T23:56:08.222417020Z" level=info msg="connecting to shim 
6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7" address="unix:///run/containerd/s/d0d05502c4a91b18552b53fb29b032b979583b44e758d931b1679169d383e728" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:08.252925 systemd[1]: Started cri-containerd-6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7.scope - libcontainer container 6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7. Sep 11 23:56:08.265261 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:08.285183 containerd[1489]: time="2025-09-11T23:56:08.285037853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f845bb6c8-kjt5c,Uid:618c1755-3b1c-4cd9-926d-4e0a5e41487c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7\"" Sep 11 23:56:08.289261 containerd[1489]: time="2025-09-11T23:56:08.289196305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 23:56:08.577226 containerd[1489]: time="2025-09-11T23:56:08.577187913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920\" id:\"7b4cf279073b23ff3798f08271f740819caae044457a7d6b9c128f7936577971\" pid:4026 exit_status:1 exited_at:{seconds:1757634968 nanos:576864503}" Sep 11 23:56:09.362277 kubelet[2633]: I0911 23:56:09.362241 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab3c824-d32f-4621-a927-bceeffc6024c" path="/var/lib/kubelet/pods/3ab3c824-d32f-4621-a927-bceeffc6024c/volumes" Sep 11 23:56:09.779979 systemd-networkd[1429]: cali28cd4702e04: Gained IPv6LL Sep 11 23:56:09.834774 containerd[1489]: time="2025-09-11T23:56:09.834546385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:09.835288 containerd[1489]: 
time="2025-09-11T23:56:09.835258087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 11 23:56:09.836270 containerd[1489]: time="2025-09-11T23:56:09.836242318Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:09.838177 containerd[1489]: time="2025-09-11T23:56:09.838140736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:09.838916 containerd[1489]: time="2025-09-11T23:56:09.838805997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.549561889s" Sep 11 23:56:09.838916 containerd[1489]: time="2025-09-11T23:56:09.838837518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 11 23:56:09.845294 containerd[1489]: time="2025-09-11T23:56:09.845264556Z" level=info msg="CreateContainer within sandbox \"6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 23:56:09.852886 containerd[1489]: time="2025-09-11T23:56:09.852836149Z" level=info msg="Container 3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:09.860263 containerd[1489]: time="2025-09-11T23:56:09.860225097Z" level=info msg="CreateContainer within sandbox 
\"6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7\"" Sep 11 23:56:09.860754 containerd[1489]: time="2025-09-11T23:56:09.860712792Z" level=info msg="StartContainer for \"3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7\"" Sep 11 23:56:09.861991 containerd[1489]: time="2025-09-11T23:56:09.861936670Z" level=info msg="connecting to shim 3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7" address="unix:///run/containerd/s/d0d05502c4a91b18552b53fb29b032b979583b44e758d931b1679169d383e728" protocol=ttrpc version=3 Sep 11 23:56:09.889027 systemd[1]: Started cri-containerd-3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7.scope - libcontainer container 3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7. Sep 11 23:56:09.923043 containerd[1489]: time="2025-09-11T23:56:09.923008153Z" level=info msg="StartContainer for \"3146d07780934e4976276a41acddc3e87aa891c20ebffc879b3a6c31c842f5f7\" returns successfully" Sep 11 23:56:09.924965 containerd[1489]: time="2025-09-11T23:56:09.924896771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 23:56:11.689402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60709453.mount: Deactivated successfully. 
Sep 11 23:56:11.705542 containerd[1489]: time="2025-09-11T23:56:11.705499529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:11.706299 containerd[1489]: time="2025-09-11T23:56:11.706041985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 11 23:56:11.706986 containerd[1489]: time="2025-09-11T23:56:11.706962572Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:11.709383 containerd[1489]: time="2025-09-11T23:56:11.709169156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:11.710147 containerd[1489]: time="2025-09-11T23:56:11.710107983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.785180371s" Sep 11 23:56:11.710195 containerd[1489]: time="2025-09-11T23:56:11.710155705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 11 23:56:11.713686 containerd[1489]: time="2025-09-11T23:56:11.713235794Z" level=info msg="CreateContainer within sandbox \"6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 23:56:11.722132 
containerd[1489]: time="2025-09-11T23:56:11.722092251Z" level=info msg="Container b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:11.740483 containerd[1489]: time="2025-09-11T23:56:11.740433583Z" level=info msg="CreateContainer within sandbox \"6ef56583533bebd7dfccc67ac5d10fa683f7f8ffd03e50583f711b233a7208e7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2\"" Sep 11 23:56:11.741265 containerd[1489]: time="2025-09-11T23:56:11.741237366Z" level=info msg="StartContainer for \"b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2\"" Sep 11 23:56:11.742911 containerd[1489]: time="2025-09-11T23:56:11.742879014Z" level=info msg="connecting to shim b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2" address="unix:///run/containerd/s/d0d05502c4a91b18552b53fb29b032b979583b44e758d931b1679169d383e728" protocol=ttrpc version=3 Sep 11 23:56:11.768957 systemd[1]: Started cri-containerd-b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2.scope - libcontainer container b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2. 
Sep 11 23:56:11.803439 containerd[1489]: time="2025-09-11T23:56:11.803395490Z" level=info msg="StartContainer for \"b5b1441b978e623148741b90d15fa19f2c7e67b6c1539793880f651df59977a2\" returns successfully" Sep 11 23:56:12.536635 kubelet[2633]: I0911 23:56:12.536543 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f845bb6c8-kjt5c" podStartSLOduration=2.114191198 podStartE2EDuration="5.536522718s" podCreationTimestamp="2025-09-11 23:56:07 +0000 UTC" firstStartedPulling="2025-09-11 23:56:08.288776332 +0000 UTC m=+35.018113987" lastFinishedPulling="2025-09-11 23:56:11.711107852 +0000 UTC m=+38.440445507" observedRunningTime="2025-09-11 23:56:12.535051956 +0000 UTC m=+39.264389611" watchObservedRunningTime="2025-09-11 23:56:12.536522718 +0000 UTC m=+39.265860373" Sep 11 23:56:13.360842 containerd[1489]: time="2025-09-11T23:56:13.360770712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtjz,Uid:bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:13.360842 containerd[1489]: time="2025-09-11T23:56:13.360828874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-m94tl,Uid:50831420-98b3-4476-b1e2-f89e1839d381,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:56:13.561838 systemd-networkd[1429]: cali300372d5136: Link UP Sep 11 23:56:13.562018 systemd-networkd[1429]: cali300372d5136: Gained carrier Sep 11 23:56:13.585885 containerd[1489]: 2025-09-11 23:56:13.457 [INFO][4221] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:56:13.585885 containerd[1489]: 2025-09-11 23:56:13.481 [INFO][4221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8rtjz-eth0 csi-node-driver- calico-system bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8 705 0 2025-09-11 23:55:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8rtjz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali300372d5136 [] [] }} ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-" Sep 11 23:56:13.585885 containerd[1489]: 2025-09-11 23:56:13.481 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.585885 containerd[1489]: 2025-09-11 23:56:13.512 [INFO][4257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" HandleID="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Workload="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.512 [INFO][4257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" HandleID="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Workload="localhost-k8s-csi--node--driver--8rtjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8rtjz", "timestamp":"2025-09-11 23:56:13.512264145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.512 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.512 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.512 [INFO][4257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.524 [INFO][4257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" host="localhost" Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.529 [INFO][4257] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.534 [INFO][4257] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.536 [INFO][4257] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.539 [INFO][4257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:13.586120 containerd[1489]: 2025-09-11 23:56:13.539 [INFO][4257] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" host="localhost" Sep 11 23:56:13.586345 containerd[1489]: 2025-09-11 23:56:13.541 [INFO][4257] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49 Sep 11 23:56:13.586345 containerd[1489]: 2025-09-11 23:56:13.546 [INFO][4257] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" host="localhost" Sep 11 23:56:13.586345 containerd[1489]: 2025-09-11 23:56:13.552 [INFO][4257] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" host="localhost" Sep 11 23:56:13.586345 containerd[1489]: 2025-09-11 23:56:13.552 [INFO][4257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" host="localhost" Sep 11 23:56:13.586345 containerd[1489]: 2025-09-11 23:56:13.552 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:56:13.586345 containerd[1489]: 2025-09-11 23:56:13.553 [INFO][4257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" HandleID="k8s-pod-network.4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Workload="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.586467 containerd[1489]: 2025-09-11 23:56:13.557 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8rtjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 54, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8rtjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali300372d5136", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:13.586520 containerd[1489]: 2025-09-11 23:56:13.557 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.586520 containerd[1489]: 2025-09-11 23:56:13.557 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali300372d5136 ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.586520 containerd[1489]: 2025-09-11 23:56:13.563 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" 
Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.586589 containerd[1489]: 2025-09-11 23:56:13.566 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8rtjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49", Pod:"csi-node-driver-8rtjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali300372d5136", MAC:"5a:cc:22:18:68:a6", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:13.586636 containerd[1489]: 2025-09-11 23:56:13.579 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" Namespace="calico-system" Pod="csi-node-driver-8rtjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--8rtjz-eth0" Sep 11 23:56:13.640792 containerd[1489]: time="2025-09-11T23:56:13.640323496Z" level=info msg="connecting to shim 4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49" address="unix:///run/containerd/s/a1162a8bb25ecf3f93e3d289d7a63e9ccb08e447d1bdfd7614b76779cc93378c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:13.688107 systemd-networkd[1429]: calib92eb8a6873: Link UP Sep 11 23:56:13.688260 systemd-networkd[1429]: calib92eb8a6873: Gained carrier Sep 11 23:56:13.692962 systemd[1]: Started cri-containerd-4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49.scope - libcontainer container 4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49. 
Sep 11 23:56:13.708635 containerd[1489]: 2025-09-11 23:56:13.458 [INFO][4228] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:56:13.708635 containerd[1489]: 2025-09-11 23:56:13.476 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0 calico-apiserver-d96c64c58- calico-apiserver 50831420-98b3-4476-b1e2-f89e1839d381 810 0 2025-09-11 23:55:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d96c64c58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d96c64c58-m94tl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib92eb8a6873 [] [] }} ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-" Sep 11 23:56:13.708635 containerd[1489]: 2025-09-11 23:56:13.476 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.708635 containerd[1489]: 2025-09-11 23:56:13.514 [INFO][4251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" HandleID="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Workload="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.514 [INFO][4251] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" HandleID="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Workload="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001367f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d96c64c58-m94tl", "timestamp":"2025-09-11 23:56:13.514640731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.514 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.552 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.553 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.629 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" host="localhost" Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.636 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.643 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.646 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.650 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:13.708871 containerd[1489]: 2025-09-11 23:56:13.650 [INFO][4251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" host="localhost" Sep 11 23:56:13.709089 containerd[1489]: 2025-09-11 23:56:13.653 [INFO][4251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7 Sep 11 23:56:13.709089 containerd[1489]: 2025-09-11 23:56:13.663 [INFO][4251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" host="localhost" Sep 11 23:56:13.709089 containerd[1489]: 2025-09-11 23:56:13.678 [INFO][4251] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" host="localhost" Sep 11 23:56:13.709089 containerd[1489]: 2025-09-11 23:56:13.678 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" host="localhost" Sep 11 23:56:13.709089 containerd[1489]: 2025-09-11 23:56:13.678 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 23:56:13.709089 containerd[1489]: 2025-09-11 23:56:13.678 [INFO][4251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" HandleID="k8s-pod-network.cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Workload="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.709203 containerd[1489]: 2025-09-11 23:56:13.684 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0", GenerateName:"calico-apiserver-d96c64c58-", Namespace:"calico-apiserver", SelfLink:"", UID:"50831420-98b3-4476-b1e2-f89e1839d381", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d96c64c58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d96c64c58-m94tl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib92eb8a6873", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:13.709255 containerd[1489]: 2025-09-11 23:56:13.685 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.709255 containerd[1489]: 2025-09-11 23:56:13.685 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib92eb8a6873 ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.709255 containerd[1489]: 2025-09-11 23:56:13.687 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.709315 containerd[1489]: 2025-09-11 23:56:13.687 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0", GenerateName:"calico-apiserver-d96c64c58-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"50831420-98b3-4476-b1e2-f89e1839d381", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d96c64c58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7", Pod:"calico-apiserver-d96c64c58-m94tl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib92eb8a6873", MAC:"16:7c:10:72:f8:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:13.709360 containerd[1489]: 2025-09-11 23:56:13.701 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-m94tl" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--m94tl-eth0" Sep 11 23:56:13.713675 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:13.731538 containerd[1489]: time="2025-09-11T23:56:13.731297190Z" level=info msg="connecting to shim 
cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7" address="unix:///run/containerd/s/b4969fe59265bcb8389fca96bf09870524a68880486abcae2008425e42bf7fad" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:13.732708 containerd[1489]: time="2025-09-11T23:56:13.732661547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtjz,Uid:bfc260fa-d1cf-4718-a7cc-517a9bdb2fd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49\"" Sep 11 23:56:13.738998 containerd[1489]: time="2025-09-11T23:56:13.738958120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 23:56:13.756943 systemd[1]: Started cri-containerd-cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7.scope - libcontainer container cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7. Sep 11 23:56:13.768303 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:13.788839 containerd[1489]: time="2025-09-11T23:56:13.788799486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-m94tl,Uid:50831420-98b3-4476-b1e2-f89e1839d381,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7\"" Sep 11 23:56:14.359783 kubelet[2633]: E0911 23:56:14.359539 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:14.360490 containerd[1489]: time="2025-09-11T23:56:14.359931443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9prl,Uid:916c0844-7b62-4825-8855-5d497b77b311,Namespace:kube-system,Attempt:0,}" Sep 11 23:56:14.492985 systemd-networkd[1429]: cali7f9ff86d2fc: Link UP Sep 11 23:56:14.493933 systemd-networkd[1429]: cali7f9ff86d2fc: 
Gained carrier Sep 11 23:56:14.518268 containerd[1489]: 2025-09-11 23:56:14.380 [INFO][4401] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:56:14.518268 containerd[1489]: 2025-09-11 23:56:14.393 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0 coredns-7c65d6cfc9- kube-system 916c0844-7b62-4825-8855-5d497b77b311 806 0 2025-09-11 23:55:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-x9prl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f9ff86d2fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-" Sep 11 23:56:14.518268 containerd[1489]: 2025-09-11 23:56:14.394 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.518268 containerd[1489]: 2025-09-11 23:56:14.421 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" HandleID="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Workload="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.422 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" 
HandleID="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Workload="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-x9prl", "timestamp":"2025-09-11 23:56:14.421845895 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.422 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.422 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.422 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.437 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" host="localhost" Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.459 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.467 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.470 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.473 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:14.518695 containerd[1489]: 2025-09-11 23:56:14.473 
[INFO][4417] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" host="localhost" Sep 11 23:56:14.518934 containerd[1489]: 2025-09-11 23:56:14.474 [INFO][4417] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e Sep 11 23:56:14.518934 containerd[1489]: 2025-09-11 23:56:14.480 [INFO][4417] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" host="localhost" Sep 11 23:56:14.518934 containerd[1489]: 2025-09-11 23:56:14.488 [INFO][4417] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" host="localhost" Sep 11 23:56:14.518934 containerd[1489]: 2025-09-11 23:56:14.488 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" host="localhost" Sep 11 23:56:14.518934 containerd[1489]: 2025-09-11 23:56:14.488 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 23:56:14.518934 containerd[1489]: 2025-09-11 23:56:14.488 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" HandleID="k8s-pod-network.11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Workload="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.519046 containerd[1489]: 2025-09-11 23:56:14.490 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"916c0844-7b62-4825-8855-5d497b77b311", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-x9prl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f9ff86d2fc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:14.519112 containerd[1489]: 2025-09-11 23:56:14.490 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.519112 containerd[1489]: 2025-09-11 23:56:14.490 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f9ff86d2fc ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.519112 containerd[1489]: 2025-09-11 23:56:14.493 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.519230 containerd[1489]: 2025-09-11 23:56:14.494 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"916c0844-7b62-4825-8855-5d497b77b311", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e", Pod:"coredns-7c65d6cfc9-x9prl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f9ff86d2fc", MAC:"b6:c8:80:47:77:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:14.519230 containerd[1489]: 2025-09-11 23:56:14.515 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-x9prl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--x9prl-eth0" Sep 11 23:56:14.546703 kubelet[2633]: I0911 23:56:14.546662 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:56:14.547619 kubelet[2633]: E0911 23:56:14.547581 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:14.611514 containerd[1489]: time="2025-09-11T23:56:14.610765257Z" level=info msg="connecting to shim 11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e" address="unix:///run/containerd/s/5209f1ef862763c766db374b1fdec1cc494d843c55e6f7132ca2f6a7872454b7" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:14.637962 systemd[1]: Started cri-containerd-11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e.scope - libcontainer container 11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e. 
Sep 11 23:56:14.651300 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:14.677337 containerd[1489]: time="2025-09-11T23:56:14.677271912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x9prl,Uid:916c0844-7b62-4825-8855-5d497b77b311,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e\"" Sep 11 23:56:14.678775 kubelet[2633]: E0911 23:56:14.678553 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:14.682504 containerd[1489]: time="2025-09-11T23:56:14.682439850Z" level=info msg="CreateContainer within sandbox \"11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 23:56:14.703964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262700002.mount: Deactivated successfully. 
Sep 11 23:56:14.705593 containerd[1489]: time="2025-09-11T23:56:14.705081894Z" level=info msg="Container 19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:14.720235 containerd[1489]: time="2025-09-11T23:56:14.720155016Z" level=info msg="CreateContainer within sandbox \"11f958fb628a0b38db6315e0ceb6f3e597d65c62c7eef20f9ecfaead3d61700e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723\"" Sep 11 23:56:14.722994 containerd[1489]: time="2025-09-11T23:56:14.722948211Z" level=info msg="StartContainer for \"19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723\"" Sep 11 23:56:14.724498 containerd[1489]: time="2025-09-11T23:56:14.724461931Z" level=info msg="connecting to shim 19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723" address="unix:///run/containerd/s/5209f1ef862763c766db374b1fdec1cc494d843c55e6f7132ca2f6a7872454b7" protocol=ttrpc version=3 Sep 11 23:56:14.781131 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:51932.service - OpenSSH per-connection server daemon (10.0.0.1:51932). Sep 11 23:56:14.796955 systemd[1]: Started cri-containerd-19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723.scope - libcontainer container 19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723. Sep 11 23:56:14.871839 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 51932 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:56:14.872628 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:56:14.881181 systemd-logind[1471]: New session 8 of user core. Sep 11 23:56:14.885983 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 11 23:56:14.893092 containerd[1489]: time="2025-09-11T23:56:14.893036110Z" level=info msg="StartContainer for \"19ef83c9700cf09e3ed9bfc51e939a05f11e36790d74f7d42b9deb37cda84723\" returns successfully" Sep 11 23:56:15.006766 containerd[1489]: time="2025-09-11T23:56:15.005897679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:15.006766 containerd[1489]: time="2025-09-11T23:56:15.006310810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 11 23:56:15.007341 containerd[1489]: time="2025-09-11T23:56:15.007304076Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:15.012803 containerd[1489]: time="2025-09-11T23:56:15.012728777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:15.013499 containerd[1489]: time="2025-09-11T23:56:15.013454396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.274309431s" Sep 11 23:56:15.013499 containerd[1489]: time="2025-09-11T23:56:15.013495517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 11 23:56:15.015640 containerd[1489]: time="2025-09-11T23:56:15.015596892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 
23:56:15.016947 containerd[1489]: time="2025-09-11T23:56:15.016908406Z" level=info msg="CreateContainer within sandbox \"4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 23:56:15.028551 systemd-networkd[1429]: cali300372d5136: Gained IPv6LL Sep 11 23:56:15.030559 containerd[1489]: time="2025-09-11T23:56:15.030507439Z" level=info msg="Container c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:15.040298 containerd[1489]: time="2025-09-11T23:56:15.040251213Z" level=info msg="CreateContainer within sandbox \"4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56\"" Sep 11 23:56:15.041240 containerd[1489]: time="2025-09-11T23:56:15.041206318Z" level=info msg="StartContainer for \"c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56\"" Sep 11 23:56:15.043024 containerd[1489]: time="2025-09-11T23:56:15.042989924Z" level=info msg="connecting to shim c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56" address="unix:///run/containerd/s/a1162a8bb25ecf3f93e3d289d7a63e9ccb08e447d1bdfd7614b76779cc93378c" protocol=ttrpc version=3 Sep 11 23:56:15.069144 systemd[1]: Started cri-containerd-c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56.scope - libcontainer container c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56. 
Sep 11 23:56:15.117827 containerd[1489]: time="2025-09-11T23:56:15.117783509Z" level=info msg="StartContainer for \"c49d80feb1e0e611765acec4cf0ad0ff0e167856fc9d9668f52d2b7493573a56\" returns successfully" Sep 11 23:56:15.135291 sshd[4535]: Connection closed by 10.0.0.1 port 51932 Sep 11 23:56:15.135838 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Sep 11 23:56:15.139917 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:51932.service: Deactivated successfully. Sep 11 23:56:15.141662 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 23:56:15.142440 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Sep 11 23:56:15.143842 systemd-logind[1471]: Removed session 8. Sep 11 23:56:15.360764 containerd[1489]: time="2025-09-11T23:56:15.360418540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-w6bcn,Uid:d0cb54ed-2781-4e3d-a425-996cae24df23,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:15.474988 systemd-networkd[1429]: calib92eb8a6873: Gained IPv6LL Sep 11 23:56:15.513399 systemd-networkd[1429]: calied0b1145673: Link UP Sep 11 23:56:15.513991 systemd-networkd[1429]: calied0b1145673: Gained carrier Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.385 [INFO][4608] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.402 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--w6bcn-eth0 goldmane-7988f88666- calico-system d0cb54ed-2781-4e3d-a425-996cae24df23 809 0 2025-09-11 23:55:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-w6bcn eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] calied0b1145673 [] [] }} ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.402 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.429 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" HandleID="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Workload="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.430 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" HandleID="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Workload="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3d60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-w6bcn", "timestamp":"2025-09-11 23:56:15.429823945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.430 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.430 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.430 [INFO][4623] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.449 [INFO][4623] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.478 [INFO][4623] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.486 [INFO][4623] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.489 [INFO][4623] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.495 [INFO][4623] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.495 [INFO][4623] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.497 [INFO][4623] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095 Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.502 [INFO][4623] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.508 [INFO][4623] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.508 [INFO][4623] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" host="localhost" Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.508 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:56:15.530040 containerd[1489]: 2025-09-11 23:56:15.508 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" HandleID="k8s-pod-network.fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Workload="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.530807 containerd[1489]: 2025-09-11 23:56:15.511 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--w6bcn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d0cb54ed-2781-4e3d-a425-996cae24df23", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-w6bcn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied0b1145673", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:15.530807 containerd[1489]: 2025-09-11 23:56:15.511 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.530807 containerd[1489]: 2025-09-11 23:56:15.511 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0b1145673 ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.530807 containerd[1489]: 2025-09-11 23:56:15.514 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.530807 containerd[1489]: 2025-09-11 23:56:15.514 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--w6bcn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d0cb54ed-2781-4e3d-a425-996cae24df23", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095", Pod:"goldmane-7988f88666-w6bcn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied0b1145673", MAC:"8e:a7:f6:59:2c:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:15.530807 containerd[1489]: 2025-09-11 23:56:15.524 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" Namespace="calico-system" Pod="goldmane-7988f88666-w6bcn" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--w6bcn-eth0" Sep 11 23:56:15.541008 kubelet[2633]: E0911 23:56:15.540962 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:15.545097 kubelet[2633]: E0911 23:56:15.544885 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:15.557407 kubelet[2633]: I0911 23:56:15.556961 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x9prl" podStartSLOduration=35.556943171 podStartE2EDuration="35.556943171s" podCreationTimestamp="2025-09-11 23:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:56:15.555951465 +0000 UTC m=+42.285289280" watchObservedRunningTime="2025-09-11 23:56:15.556943171 +0000 UTC m=+42.286280826" Sep 11 23:56:15.564770 containerd[1489]: time="2025-09-11T23:56:15.564675412Z" level=info msg="connecting to shim fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095" address="unix:///run/containerd/s/dee029b66c2e4767eabe18e572dec251f27e4ee423ef3a3b86b605b609bfbd5b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:15.607315 systemd[1]: Started cri-containerd-fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095.scope - libcontainer container fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095. 
Sep 11 23:56:15.622023 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:15.654579 containerd[1489]: time="2025-09-11T23:56:15.654524188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-w6bcn,Uid:d0cb54ed-2781-4e3d-a425-996cae24df23,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095\"" Sep 11 23:56:15.831108 systemd-networkd[1429]: vxlan.calico: Link UP Sep 11 23:56:15.831116 systemd-networkd[1429]: vxlan.calico: Gained carrier Sep 11 23:56:15.922926 systemd-networkd[1429]: cali7f9ff86d2fc: Gained IPv6LL Sep 11 23:56:16.359904 kubelet[2633]: E0911 23:56:16.359778 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:16.361040 containerd[1489]: time="2025-09-11T23:56:16.360156032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8cptp,Uid:ee0f6e11-5e4a-4e87-8841-5a6300168397,Namespace:kube-system,Attempt:0,}" Sep 11 23:56:16.385083 containerd[1489]: time="2025-09-11T23:56:16.385041823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfccd6f76-6k2x9,Uid:4397b26b-df1d-4a49-b10f-5c027daf1414,Namespace:calico-system,Attempt:0,}" Sep 11 23:56:16.515821 systemd-networkd[1429]: cali3803a3cbb29: Link UP Sep 11 23:56:16.518345 systemd-networkd[1429]: cali3803a3cbb29: Gained carrier Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.427 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0 coredns-7c65d6cfc9- kube-system ee0f6e11-5e4a-4e87-8841-5a6300168397 807 0 2025-09-11 23:55:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8cptp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3803a3cbb29 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.427 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.458 [INFO][4814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" HandleID="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Workload="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.458 [INFO][4814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" HandleID="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Workload="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004929a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8cptp", "timestamp":"2025-09-11 23:56:16.458781014 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.458 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.459 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.459 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.471 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.477 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.482 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.484 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.487 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.487 [INFO][4814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.488 [INFO][4814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34 Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.499 [INFO][4814] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.506 [INFO][4814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.506 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" host="localhost" Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.508 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:56:16.540791 containerd[1489]: 2025-09-11 23:56:16.508 [INFO][4814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" HandleID="k8s-pod-network.e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Workload="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.541525 containerd[1489]: 2025-09-11 23:56:16.511 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ee0f6e11-5e4a-4e87-8841-5a6300168397", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8cptp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3803a3cbb29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:16.541525 containerd[1489]: 2025-09-11 23:56:16.511 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.541525 containerd[1489]: 2025-09-11 23:56:16.511 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3803a3cbb29 ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.541525 containerd[1489]: 2025-09-11 23:56:16.518 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.541525 containerd[1489]: 2025-09-11 23:56:16.519 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ee0f6e11-5e4a-4e87-8841-5a6300168397", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34", Pod:"coredns-7c65d6cfc9-8cptp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali3803a3cbb29", MAC:"22:87:1e:01:8b:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:16.541525 containerd[1489]: 2025-09-11 23:56:16.535 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8cptp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8cptp-eth0" Sep 11 23:56:16.551526 kubelet[2633]: E0911 23:56:16.551473 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:16.568846 containerd[1489]: time="2025-09-11T23:56:16.568799005Z" level=info msg="connecting to shim e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34" address="unix:///run/containerd/s/d07ecc96c4f801c5f781d82cb55000033064bf788e07f962728acf67948d2fc2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:16.614950 systemd[1]: Started cri-containerd-e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34.scope - libcontainer container e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34. 
Sep 11 23:56:16.619630 systemd-networkd[1429]: cali99f8c1a1cd4: Link UP Sep 11 23:56:16.620589 systemd-networkd[1429]: cali99f8c1a1cd4: Gained carrier Sep 11 23:56:16.636200 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.435 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0 calico-kube-controllers-5dfccd6f76- calico-system 4397b26b-df1d-4a49-b10f-5c027daf1414 800 0 2025-09-11 23:55:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dfccd6f76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5dfccd6f76-6k2x9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali99f8c1a1cd4 [] [] }} ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.436 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.479 [INFO][4821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" 
HandleID="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Workload="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.480 [INFO][4821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" HandleID="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Workload="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5dfccd6f76-6k2x9", "timestamp":"2025-09-11 23:56:16.479865909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.480 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.507 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.508 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.573 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.580 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.585 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.588 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.594 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.595 [INFO][4821] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.597 [INFO][4821] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9 Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.601 [INFO][4821] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.610 [INFO][4821] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.610 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" host="localhost" Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.610 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:56:16.644768 containerd[1489]: 2025-09-11 23:56:16.610 [INFO][4821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" HandleID="k8s-pod-network.609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Workload="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.645530 containerd[1489]: 2025-09-11 23:56:16.613 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0", GenerateName:"calico-kube-controllers-5dfccd6f76-", Namespace:"calico-system", SelfLink:"", UID:"4397b26b-df1d-4a49-b10f-5c027daf1414", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfccd6f76", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5dfccd6f76-6k2x9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99f8c1a1cd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:16.645530 containerd[1489]: 2025-09-11 23:56:16.613 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.645530 containerd[1489]: 2025-09-11 23:56:16.614 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99f8c1a1cd4 ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.645530 containerd[1489]: 2025-09-11 23:56:16.621 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.645530 containerd[1489]: 
2025-09-11 23:56:16.622 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0", GenerateName:"calico-kube-controllers-5dfccd6f76-", Namespace:"calico-system", SelfLink:"", UID:"4397b26b-df1d-4a49-b10f-5c027daf1414", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfccd6f76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9", Pod:"calico-kube-controllers-5dfccd6f76-6k2x9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99f8c1a1cd4", MAC:"6a:0c:a3:dc:a1:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:16.645530 
containerd[1489]: 2025-09-11 23:56:16.637 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" Namespace="calico-system" Pod="calico-kube-controllers-5dfccd6f76-6k2x9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dfccd6f76--6k2x9-eth0" Sep 11 23:56:16.670891 containerd[1489]: time="2025-09-11T23:56:16.670833193Z" level=info msg="connecting to shim 609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9" address="unix:///run/containerd/s/6d85f1bb8e05bbd1c31a25671fbacbdd159883a8bdf2964a068f86569c4f0f3b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:16.682068 containerd[1489]: time="2025-09-11T23:56:16.682020957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8cptp,Uid:ee0f6e11-5e4a-4e87-8841-5a6300168397,Namespace:kube-system,Attempt:0,} returns sandbox id \"e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34\"" Sep 11 23:56:16.682855 kubelet[2633]: E0911 23:56:16.682811 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:16.687515 containerd[1489]: time="2025-09-11T23:56:16.687399654Z" level=info msg="CreateContainer within sandbox \"e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 23:56:16.691099 systemd-networkd[1429]: calied0b1145673: Gained IPv6LL Sep 11 23:56:16.704636 containerd[1489]: time="2025-09-11T23:56:16.704197240Z" level=info msg="Container e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:16.718466 containerd[1489]: time="2025-09-11T23:56:16.716512992Z" level=info msg="CreateContainer within sandbox \"e73e052a81fb12a96ec2217abe17daf4cbe12ececb070ab2f19c198b366cae34\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a\"" Sep 11 23:56:16.718069 systemd[1]: Started cri-containerd-609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9.scope - libcontainer container 609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9. Sep 11 23:56:16.719370 containerd[1489]: time="2025-09-11T23:56:16.719166860Z" level=info msg="StartContainer for \"e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a\"" Sep 11 23:56:16.722945 containerd[1489]: time="2025-09-11T23:56:16.722632708Z" level=info msg="connecting to shim e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a" address="unix:///run/containerd/s/d07ecc96c4f801c5f781d82cb55000033064bf788e07f962728acf67948d2fc2" protocol=ttrpc version=3 Sep 11 23:56:16.741584 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:16.756205 systemd[1]: Started cri-containerd-e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a.scope - libcontainer container e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a. 
Sep 11 23:56:16.789647 containerd[1489]: time="2025-09-11T23:56:16.789603647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfccd6f76-6k2x9,Uid:4397b26b-df1d-4a49-b10f-5c027daf1414,Namespace:calico-system,Attempt:0,} returns sandbox id \"609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9\"" Sep 11 23:56:16.807607 containerd[1489]: time="2025-09-11T23:56:16.807561422Z" level=info msg="StartContainer for \"e659abe7e734627a17c5eb8075ce4a1f96ed2c211062aef2a33617dae526ba7a\" returns successfully" Sep 11 23:56:17.196134 containerd[1489]: time="2025-09-11T23:56:17.196028241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:17.196769 containerd[1489]: time="2025-09-11T23:56:17.196670417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 11 23:56:17.197900 containerd[1489]: time="2025-09-11T23:56:17.197545159Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:17.201247 containerd[1489]: time="2025-09-11T23:56:17.201194089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:17.201809 containerd[1489]: time="2025-09-11T23:56:17.201682221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.186048929s" Sep 11 23:56:17.201809 
containerd[1489]: time="2025-09-11T23:56:17.201713902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 23:56:17.204207 containerd[1489]: time="2025-09-11T23:56:17.204141362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 23:56:17.205979 containerd[1489]: time="2025-09-11T23:56:17.205900886Z" level=info msg="CreateContainer within sandbox \"cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 23:56:17.213802 containerd[1489]: time="2025-09-11T23:56:17.213756800Z" level=info msg="Container 05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:17.220257 containerd[1489]: time="2025-09-11T23:56:17.220204600Z" level=info msg="CreateContainer within sandbox \"cc55aa557b01e2db6e95a0dba6415e004e3c7b9ac4778ee5caf6d1b7c3fe14e7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e\"" Sep 11 23:56:17.221051 containerd[1489]: time="2025-09-11T23:56:17.221019220Z" level=info msg="StartContainer for \"05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e\"" Sep 11 23:56:17.222365 containerd[1489]: time="2025-09-11T23:56:17.222328693Z" level=info msg="connecting to shim 05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e" address="unix:///run/containerd/s/b4969fe59265bcb8389fca96bf09870524a68880486abcae2008425e42bf7fad" protocol=ttrpc version=3 Sep 11 23:56:17.242941 systemd[1]: Started cri-containerd-05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e.scope - libcontainer container 05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e. 
Sep 11 23:56:17.283870 containerd[1489]: time="2025-09-11T23:56:17.283829976Z" level=info msg="StartContainer for \"05c768308d757ef07be516cc30948b3e7e5ddc653339252cd668640f08543b4e\" returns successfully" Sep 11 23:56:17.361580 containerd[1489]: time="2025-09-11T23:56:17.361542541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-dqlfz,Uid:1dbb72ea-13b1-45f4-b6ec-65f4f6286005,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:56:17.482225 systemd-networkd[1429]: calibef1fff2766: Link UP Sep 11 23:56:17.482643 systemd-networkd[1429]: calibef1fff2766: Gained carrier Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.402 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0 calico-apiserver-d96c64c58- calico-apiserver 1dbb72ea-13b1-45f4-b6ec-65f4f6286005 808 0 2025-09-11 23:55:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d96c64c58 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d96c64c58-dqlfz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibef1fff2766 [] [] }} ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.402 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.501887 containerd[1489]: 
2025-09-11 23:56:17.431 [INFO][5036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" HandleID="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Workload="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.431 [INFO][5036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" HandleID="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Workload="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011e500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d96c64c58-dqlfz", "timestamp":"2025-09-11 23:56:17.431370351 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.431 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.431 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.431 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.443 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.448 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.452 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.454 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.457 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.457 [INFO][5036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.459 [INFO][5036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.465 [INFO][5036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.476 [INFO][5036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.476 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" host="localhost" Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.476 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:56:17.501887 containerd[1489]: 2025-09-11 23:56:17.476 [INFO][5036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" HandleID="k8s-pod-network.372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Workload="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.502453 containerd[1489]: 2025-09-11 23:56:17.479 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0", GenerateName:"calico-apiserver-d96c64c58-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dbb72ea-13b1-45f4-b6ec-65f4f6286005", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d96c64c58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d96c64c58-dqlfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibef1fff2766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:17.502453 containerd[1489]: 2025-09-11 23:56:17.479 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.502453 containerd[1489]: 2025-09-11 23:56:17.479 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibef1fff2766 ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.502453 containerd[1489]: 2025-09-11 23:56:17.482 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.502453 containerd[1489]: 2025-09-11 23:56:17.483 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0", GenerateName:"calico-apiserver-d96c64c58-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dbb72ea-13b1-45f4-b6ec-65f4f6286005", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d96c64c58", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d", Pod:"calico-apiserver-d96c64c58-dqlfz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibef1fff2766", MAC:"aa:c1:cf:5b:45:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:56:17.502453 containerd[1489]: 2025-09-11 23:56:17.497 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" Namespace="calico-apiserver" Pod="calico-apiserver-d96c64c58-dqlfz" WorkloadEndpoint="localhost-k8s-calico--apiserver--d96c64c58--dqlfz-eth0" Sep 11 23:56:17.526251 containerd[1489]: time="2025-09-11T23:56:17.526152939Z" level=info msg="connecting to shim 372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d" address="unix:///run/containerd/s/44f5a1ece0dfd900116abae89ea104895586238035a7e599a6e1471e7d3aa10b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:56:17.555131 kubelet[2633]: E0911 23:56:17.555085 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:17.557160 systemd[1]: Started cri-containerd-372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d.scope - libcontainer container 372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d. Sep 11 23:56:17.570714 kubelet[2633]: E0911 23:56:17.570665 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:17.578442 kubelet[2633]: I0911 23:56:17.578349 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8cptp" podStartSLOduration=37.578331032 podStartE2EDuration="37.578331032s" podCreationTimestamp="2025-09-11 23:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:56:17.577653935 +0000 UTC m=+44.306991550" watchObservedRunningTime="2025-09-11 23:56:17.578331032 +0000 UTC m=+44.307668687" Sep 11 23:56:17.604809 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:56:17.647425 containerd[1489]: 
time="2025-09-11T23:56:17.647372782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d96c64c58-dqlfz,Uid:1dbb72ea-13b1-45f4-b6ec-65f4f6286005,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d\"" Sep 11 23:56:17.650428 containerd[1489]: time="2025-09-11T23:56:17.650385857Z" level=info msg="CreateContainer within sandbox \"372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 23:56:17.650855 systemd-networkd[1429]: cali3803a3cbb29: Gained IPv6LL Sep 11 23:56:17.661153 containerd[1489]: time="2025-09-11T23:56:17.661068881Z" level=info msg="Container ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:17.665240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032114308.mount: Deactivated successfully. Sep 11 23:56:17.672071 containerd[1489]: time="2025-09-11T23:56:17.672030193Z" level=info msg="CreateContainer within sandbox \"372f801e72460ca671143fd386a6b7b2bb337dec03f18d511cf97bf69151378d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d\"" Sep 11 23:56:17.672952 containerd[1489]: time="2025-09-11T23:56:17.672923255Z" level=info msg="StartContainer for \"ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d\"" Sep 11 23:56:17.674151 containerd[1489]: time="2025-09-11T23:56:17.674112324Z" level=info msg="connecting to shim ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d" address="unix:///run/containerd/s/44f5a1ece0dfd900116abae89ea104895586238035a7e599a6e1471e7d3aa10b" protocol=ttrpc version=3 Sep 11 23:56:17.698148 systemd[1]: Started cri-containerd-ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d.scope - libcontainer container 
ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d. Sep 11 23:56:17.740141 containerd[1489]: time="2025-09-11T23:56:17.739925795Z" level=info msg="StartContainer for \"ec40864b308b851abc66dbf8bc82684f087c4488f45550515288fedaedc6411d\" returns successfully" Sep 11 23:56:17.842907 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL Sep 11 23:56:18.163067 systemd-networkd[1429]: cali99f8c1a1cd4: Gained IPv6LL Sep 11 23:56:18.602753 kubelet[2633]: E0911 23:56:18.602545 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:18.621089 kubelet[2633]: I0911 23:56:18.621028 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d96c64c58-m94tl" podStartSLOduration=26.207942778 podStartE2EDuration="29.621008914s" podCreationTimestamp="2025-09-11 23:55:49 +0000 UTC" firstStartedPulling="2025-09-11 23:56:13.790296167 +0000 UTC m=+40.519633782" lastFinishedPulling="2025-09-11 23:56:17.203362263 +0000 UTC m=+43.932699918" observedRunningTime="2025-09-11 23:56:17.608632502 +0000 UTC m=+44.337970157" watchObservedRunningTime="2025-09-11 23:56:18.621008914 +0000 UTC m=+45.350346569" Sep 11 23:56:18.623033 kubelet[2633]: I0911 23:56:18.622972 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d96c64c58-dqlfz" podStartSLOduration=29.622960881 podStartE2EDuration="29.622960881s" podCreationTimestamp="2025-09-11 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:56:18.621169158 +0000 UTC m=+45.350506813" watchObservedRunningTime="2025-09-11 23:56:18.622960881 +0000 UTC m=+45.352298536" Sep 11 23:56:18.774093 containerd[1489]: time="2025-09-11T23:56:18.774031179Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:18.775443 containerd[1489]: time="2025-09-11T23:56:18.775362971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 11 23:56:18.776430 containerd[1489]: time="2025-09-11T23:56:18.776394636Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:18.778464 containerd[1489]: time="2025-09-11T23:56:18.778415485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:56:18.779039 containerd[1489]: time="2025-09-11T23:56:18.778998299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.574808296s" Sep 11 23:56:18.779086 containerd[1489]: time="2025-09-11T23:56:18.779035460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 11 23:56:18.781978 containerd[1489]: time="2025-09-11T23:56:18.781904089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 23:56:18.783700 containerd[1489]: time="2025-09-11T23:56:18.783651971Z" level=info msg="CreateContainer within sandbox \"4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 23:56:18.803766 containerd[1489]: time="2025-09-11T23:56:18.802924798Z" level=info msg="Container 4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:56:18.876924 containerd[1489]: time="2025-09-11T23:56:18.876274334Z" level=info msg="CreateContainer within sandbox \"4e35db83ea393d44cbfdd999d18986b3b45fea0c008e59fc80655505be11dd49\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a\"" Sep 11 23:56:18.877758 containerd[1489]: time="2025-09-11T23:56:18.877187556Z" level=info msg="StartContainer for \"4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a\"" Sep 11 23:56:18.878901 containerd[1489]: time="2025-09-11T23:56:18.878874717Z" level=info msg="connecting to shim 4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a" address="unix:///run/containerd/s/a1162a8bb25ecf3f93e3d289d7a63e9ccb08e447d1bdfd7614b76779cc93378c" protocol=ttrpc version=3 Sep 11 23:56:18.905438 systemd[1]: Started cri-containerd-4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a.scope - libcontainer container 4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a. 
Sep 11 23:56:18.955126 containerd[1489]: time="2025-09-11T23:56:18.955076042Z" level=info msg="StartContainer for \"4e6d2a4afcf094e0ecfdc0e2e03f0be6e43254c960ec81dd5bdc0d4db8388b3a\" returns successfully" Sep 11 23:56:19.122980 systemd-networkd[1429]: calibef1fff2766: Gained IPv6LL Sep 11 23:56:19.455571 kubelet[2633]: I0911 23:56:19.455515 2633 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 23:56:19.455707 kubelet[2633]: I0911 23:56:19.455650 2633 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 23:56:19.607654 kubelet[2633]: E0911 23:56:19.607621 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:56:19.608020 kubelet[2633]: I0911 23:56:19.607778 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:56:19.624369 kubelet[2633]: I0911 23:56:19.624283 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8rtjz" podStartSLOduration=20.577970532 podStartE2EDuration="25.624249316s" podCreationTimestamp="2025-09-11 23:55:54 +0000 UTC" firstStartedPulling="2025-09-11 23:56:13.734543479 +0000 UTC m=+40.463881134" lastFinishedPulling="2025-09-11 23:56:18.780822263 +0000 UTC m=+45.510159918" observedRunningTime="2025-09-11 23:56:19.623460018 +0000 UTC m=+46.352797673" watchObservedRunningTime="2025-09-11 23:56:19.624249316 +0000 UTC m=+46.353586971" Sep 11 23:56:20.148547 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:45946.service - OpenSSH per-connection server daemon (10.0.0.1:45946). 
Sep 11 23:56:20.243108 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 45946 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI Sep 11 23:56:20.243959 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:56:20.258682 systemd-logind[1471]: New session 9 of user core. Sep 11 23:56:20.266111 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 23:56:20.596277 sshd[5192]: Connection closed by 10.0.0.1 port 45946 Sep 11 23:56:20.596928 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Sep 11 23:56:20.601903 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:45946.service: Deactivated successfully. Sep 11 23:56:20.603701 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 23:56:20.605259 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Sep 11 23:56:20.607216 systemd-logind[1471]: Removed session 9. Sep 11 23:56:20.917055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721891896.mount: Deactivated successfully. 
Sep 11 23:56:21.474685 containerd[1489]: time="2025-09-11T23:56:21.474303393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:56:21.476144 containerd[1489]: time="2025-09-11T23:56:21.476101634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 11 23:56:21.476923 containerd[1489]: time="2025-09-11T23:56:21.476896092Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:56:21.481156 containerd[1489]: time="2025-09-11T23:56:21.481113948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:56:21.481612 containerd[1489]: time="2025-09-11T23:56:21.481578079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.699624108s"
Sep 11 23:56:21.481655 containerd[1489]: time="2025-09-11T23:56:21.481616599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 11 23:56:21.483812 containerd[1489]: time="2025-09-11T23:56:21.482662103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 11 23:56:21.483894 containerd[1489]: time="2025-09-11T23:56:21.483450361Z" level=info msg="CreateContainer within sandbox \"fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 11 23:56:21.496660 containerd[1489]: time="2025-09-11T23:56:21.495476274Z" level=info msg="Container ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:56:21.507001 containerd[1489]: time="2025-09-11T23:56:21.506955575Z" level=info msg="CreateContainer within sandbox \"fb022cb0a278f2fb2ee176d188a9107ec2be4937bb8ac70b3123e216eebfd095\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\""
Sep 11 23:56:21.507676 containerd[1489]: time="2025-09-11T23:56:21.507632911Z" level=info msg="StartContainer for \"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\""
Sep 11 23:56:21.509152 containerd[1489]: time="2025-09-11T23:56:21.509116345Z" level=info msg="connecting to shim ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4" address="unix:///run/containerd/s/dee029b66c2e4767eabe18e572dec251f27e4ee423ef3a3b86b605b609bfbd5b" protocol=ttrpc version=3
Sep 11 23:56:21.536962 systemd[1]: Started cri-containerd-ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4.scope - libcontainer container ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4.
Sep 11 23:56:21.591482 containerd[1489]: time="2025-09-11T23:56:21.591442216Z" level=info msg="StartContainer for \"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\" returns successfully"
Sep 11 23:56:21.742572 containerd[1489]: time="2025-09-11T23:56:21.742414248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\" id:\"b6ff91489501a2f1eefbbbf6e52818912dd09580646063ec1277e0a544aa484e\" pid:5265 exit_status:1 exited_at:{seconds:1757634981 nanos:741981998}"
Sep 11 23:56:22.693966 containerd[1489]: time="2025-09-11T23:56:22.693899577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\" id:\"a3d7f7015ded52c038054cb45379c502f92188c01b54f2f1e1fc828d622cde75\" pid:5297 exit_status:1 exited_at:{seconds:1757634982 nanos:693302324}"
Sep 11 23:56:23.713056 containerd[1489]: time="2025-09-11T23:56:23.712961771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:56:23.714031 containerd[1489]: time="2025-09-11T23:56:23.713916672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 11 23:56:23.714957 containerd[1489]: time="2025-09-11T23:56:23.714925894Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:56:23.717249 containerd[1489]: time="2025-09-11T23:56:23.717207464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:56:23.717794 containerd[1489]: time="2025-09-11T23:56:23.717767197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.233942227s"
Sep 11 23:56:23.717841 containerd[1489]: time="2025-09-11T23:56:23.717798117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 11 23:56:23.732673 containerd[1489]: time="2025-09-11T23:56:23.732624882Z" level=info msg="CreateContainer within sandbox \"609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 11 23:56:23.740101 containerd[1489]: time="2025-09-11T23:56:23.739581274Z" level=info msg="Container 7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:56:23.756418 containerd[1489]: time="2025-09-11T23:56:23.756358361Z" level=info msg="CreateContainer within sandbox \"609214d0dbde73dee1e330e98d2811809457575343f1dc559597c199d3ae72d9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548\""
Sep 11 23:56:23.758921 containerd[1489]: time="2025-09-11T23:56:23.757433665Z" level=info msg="StartContainer for \"7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548\""
Sep 11 23:56:23.758921 containerd[1489]: time="2025-09-11T23:56:23.758488848Z" level=info msg="connecting to shim 7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548" address="unix:///run/containerd/s/6d85f1bb8e05bbd1c31a25671fbacbdd159883a8bdf2964a068f86569c4f0f3b" protocol=ttrpc version=3
Sep 11 23:56:23.790978 systemd[1]: Started cri-containerd-7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548.scope - libcontainer container 7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548.
Sep 11 23:56:23.830869 containerd[1489]: time="2025-09-11T23:56:23.830808511Z" level=info msg="StartContainer for \"7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548\" returns successfully"
Sep 11 23:56:24.675894 containerd[1489]: time="2025-09-11T23:56:24.675823274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548\" id:\"539160aaec7a7a9321a7ce6e67dd8dc51807b45f7bedf86ffb221d2a1bef0d7a\" pid:5374 exited_at:{seconds:1757634984 nanos:673626547}"
Sep 11 23:56:24.685598 kubelet[2633]: I0911 23:56:24.685357 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-w6bcn" podStartSLOduration=24.861998968 podStartE2EDuration="30.685337399s" podCreationTimestamp="2025-09-11 23:55:54 +0000 UTC" firstStartedPulling="2025-09-11 23:56:15.659022985 +0000 UTC m=+42.388360600" lastFinishedPulling="2025-09-11 23:56:21.482361376 +0000 UTC m=+48.211699031" observedRunningTime="2025-09-11 23:56:21.648033862 +0000 UTC m=+48.377371557" watchObservedRunningTime="2025-09-11 23:56:24.685337399 +0000 UTC m=+51.414675094"
Sep 11 23:56:24.700956 kubelet[2633]: I0911 23:56:24.700388 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5dfccd6f76-6k2x9" podStartSLOduration=23.771048358 podStartE2EDuration="30.700369882s" podCreationTimestamp="2025-09-11 23:55:54 +0000 UTC" firstStartedPulling="2025-09-11 23:56:16.792836209 +0000 UTC m=+43.522173864" lastFinishedPulling="2025-09-11 23:56:23.722157733 +0000 UTC m=+50.451495388" observedRunningTime="2025-09-11 23:56:24.685068953 +0000 UTC m=+51.414406608" watchObservedRunningTime="2025-09-11 23:56:24.700369882 +0000 UTC m=+51.429707537"
Sep 11 23:56:25.618427 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:45956.service - OpenSSH per-connection server daemon (10.0.0.1:45956).
Sep 11 23:56:25.684307 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 45956 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:25.685904 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:25.690631 systemd-logind[1471]: New session 10 of user core.
Sep 11 23:56:25.694897 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 23:56:25.894855 sshd[5388]: Connection closed by 10.0.0.1 port 45956
Sep 11 23:56:25.895781 sshd-session[5385]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:25.903218 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:45956.service: Deactivated successfully.
Sep 11 23:56:25.904978 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 23:56:25.906370 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit.
Sep 11 23:56:25.909361 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:45958.service - OpenSSH per-connection server daemon (10.0.0.1:45958).
Sep 11 23:56:25.911468 systemd-logind[1471]: Removed session 10.
Sep 11 23:56:25.966785 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 45958 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:25.967268 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:25.971818 systemd-logind[1471]: New session 11 of user core.
Sep 11 23:56:25.978913 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 23:56:26.225060 sshd[5406]: Connection closed by 10.0.0.1 port 45958
Sep 11 23:56:26.225848 sshd-session[5403]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:26.241196 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:45958.service: Deactivated successfully.
Sep 11 23:56:26.246696 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 23:56:26.250976 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit.
Sep 11 23:56:26.256156 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:45962.service - OpenSSH per-connection server daemon (10.0.0.1:45962).
Sep 11 23:56:26.258279 systemd-logind[1471]: Removed session 11.
Sep 11 23:56:26.311253 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 45962 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:26.312938 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:26.316746 systemd-logind[1471]: New session 12 of user core.
Sep 11 23:56:26.330927 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 23:56:26.460187 sshd[5431]: Connection closed by 10.0.0.1 port 45962
Sep 11 23:56:26.461729 sshd-session[5428]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:26.465092 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:45962.service: Deactivated successfully.
Sep 11 23:56:26.468361 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 23:56:26.470287 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit.
Sep 11 23:56:26.471511 systemd-logind[1471]: Removed session 12.
Sep 11 23:56:29.385531 kubelet[2633]: I0911 23:56:29.385393 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 23:56:31.476950 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:44422.service - OpenSSH per-connection server daemon (10.0.0.1:44422).
Sep 11 23:56:31.530759 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 44422 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:31.528891 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:31.534713 systemd-logind[1471]: New session 13 of user core.
Sep 11 23:56:31.550945 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 23:56:31.682501 sshd[5451]: Connection closed by 10.0.0.1 port 44422
Sep 11 23:56:31.681347 sshd-session[5448]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:31.689914 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:44422.service: Deactivated successfully.
Sep 11 23:56:31.691977 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 23:56:31.694645 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit.
Sep 11 23:56:31.696608 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:44424.service - OpenSSH per-connection server daemon (10.0.0.1:44424).
Sep 11 23:56:31.697971 systemd-logind[1471]: Removed session 13.
Sep 11 23:56:31.762484 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 44424 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:31.765339 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:31.774356 systemd-logind[1471]: New session 14 of user core.
Sep 11 23:56:31.778923 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 23:56:32.025633 sshd[5467]: Connection closed by 10.0.0.1 port 44424
Sep 11 23:56:32.026052 sshd-session[5464]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:32.037497 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:44424.service: Deactivated successfully.
Sep 11 23:56:32.040001 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 23:56:32.041022 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit.
Sep 11 23:56:32.043687 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:44428.service - OpenSSH per-connection server daemon (10.0.0.1:44428).
Sep 11 23:56:32.045209 systemd-logind[1471]: Removed session 14.
Sep 11 23:56:32.101236 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 44428 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:32.102620 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:32.107831 systemd-logind[1471]: New session 15 of user core.
Sep 11 23:56:32.114909 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 23:56:33.775623 sshd[5482]: Connection closed by 10.0.0.1 port 44428
Sep 11 23:56:33.777082 sshd-session[5479]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:33.789146 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:44428.service: Deactivated successfully.
Sep 11 23:56:33.791026 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 23:56:33.791235 systemd[1]: session-15.scope: Consumed 580ms CPU time, 73.6M memory peak.
Sep 11 23:56:33.793675 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit.
Sep 11 23:56:33.796536 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:44440.service - OpenSSH per-connection server daemon (10.0.0.1:44440).
Sep 11 23:56:33.798128 systemd-logind[1471]: Removed session 15.
Sep 11 23:56:33.857193 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 44440 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:33.858770 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:33.863119 systemd-logind[1471]: New session 16 of user core.
Sep 11 23:56:33.869921 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 23:56:34.198614 sshd[5506]: Connection closed by 10.0.0.1 port 44440
Sep 11 23:56:34.200116 sshd-session[5503]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:34.207179 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:44440.service: Deactivated successfully.
Sep 11 23:56:34.210972 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 23:56:34.212487 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit.
Sep 11 23:56:34.216493 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:44452.service - OpenSSH per-connection server daemon (10.0.0.1:44452).
Sep 11 23:56:34.218100 systemd-logind[1471]: Removed session 16.
Sep 11 23:56:34.276123 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 44452 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:34.278005 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:34.282494 systemd-logind[1471]: New session 17 of user core.
Sep 11 23:56:34.293940 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 23:56:34.417214 sshd[5521]: Connection closed by 10.0.0.1 port 44452
Sep 11 23:56:34.417617 sshd-session[5518]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:34.422108 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:44452.service: Deactivated successfully.
Sep 11 23:56:34.424218 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 23:56:34.425807 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit.
Sep 11 23:56:34.427303 systemd-logind[1471]: Removed session 17.
Sep 11 23:56:35.305760 containerd[1489]: time="2025-09-11T23:56:35.305677017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec5aecfa04dee9d191d12fd3e0d4cc5495f30a9ef11b6251ca3405011a4fa920\" id:\"6c97c1fb0c7733feddc1be97d30d9220039735191f08505835ca8cbb6c2fb235\" pid:5545 exited_at:{seconds:1757634995 nanos:305323690}"
Sep 11 23:56:37.221473 containerd[1489]: time="2025-09-11T23:56:37.221428660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d08af26a448ef205cc219137ee9d1528ae2df0bd8a5bd4f1c1257483414d548\" id:\"6236f725b66405d98a443c878394d6c7bec17c284f5619c7aee1131c27587a5c\" pid:5578 exited_at:{seconds:1757634997 nanos:221165935}"
Sep 11 23:56:39.433637 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:44462.service - OpenSSH per-connection server daemon (10.0.0.1:44462).
Sep 11 23:56:39.495167 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 44462 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:39.496623 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:39.501138 systemd-logind[1471]: New session 18 of user core.
Sep 11 23:56:39.510958 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 23:56:39.650854 sshd[5597]: Connection closed by 10.0.0.1 port 44462
Sep 11 23:56:39.651217 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:39.655154 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:44462.service: Deactivated successfully.
Sep 11 23:56:39.657364 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 23:56:39.659377 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit.
Sep 11 23:56:39.660580 systemd-logind[1471]: Removed session 18.
Sep 11 23:56:44.665008 systemd[1]: Started sshd@18-10.0.0.129:22-10.0.0.1:49808.service - OpenSSH per-connection server daemon (10.0.0.1:49808).
Sep 11 23:56:44.752631 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 49808 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:44.754498 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:44.765073 systemd-logind[1471]: New session 19 of user core.
Sep 11 23:56:44.777964 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 23:56:45.002902 sshd[5619]: Connection closed by 10.0.0.1 port 49808
Sep 11 23:56:45.003520 sshd-session[5616]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:45.009427 systemd[1]: sshd@18-10.0.0.129:22-10.0.0.1:49808.service: Deactivated successfully.
Sep 11 23:56:45.012608 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 23:56:45.014245 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit.
Sep 11 23:56:45.016406 systemd-logind[1471]: Removed session 19.
Sep 11 23:56:46.265997 containerd[1489]: time="2025-09-11T23:56:46.265953894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\" id:\"3dc17916de0e9853cf7c64ef63681281cf718464a36f9a3d4a53378f22b62c23\" pid:5643 exited_at:{seconds:1757635006 nanos:265698452}"
Sep 11 23:56:49.382736 containerd[1489]: time="2025-09-11T23:56:49.382624747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebadb65e891bfe7085e1b7598d6006c0ad013ef90cb9d9a7bb51a3b558f638c4\" id:\"afd3ba4325a9f00e32372696afdaf71ee89d46b2c117ea346bf78c57a97bfca9\" pid:5668 exited_at:{seconds:1757635009 nanos:382321905}"
Sep 11 23:56:50.017986 systemd[1]: Started sshd@19-10.0.0.129:22-10.0.0.1:40646.service - OpenSSH per-connection server daemon (10.0.0.1:40646).
Sep 11 23:56:50.071896 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 40646 ssh2: RSA SHA256:pULdEgqoZ1CjXpNcHD/2mxhbP7BalAGKKlfd6deKmwI
Sep 11 23:56:50.073543 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:56:50.081508 systemd-logind[1471]: New session 20 of user core.
Sep 11 23:56:50.091957 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 23:56:50.247266 sshd[5686]: Connection closed by 10.0.0.1 port 40646
Sep 11 23:56:50.248030 sshd-session[5683]: pam_unix(sshd:session): session closed for user core
Sep 11 23:56:50.253065 systemd[1]: sshd@19-10.0.0.129:22-10.0.0.1:40646.service: Deactivated successfully.
Sep 11 23:56:50.255056 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 23:56:50.257357 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit.
Sep 11 23:56:50.258443 systemd-logind[1471]: Removed session 20.
Sep 11 23:56:51.359847 kubelet[2633]: E0911 23:56:51.359782 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:56:51.359847 kubelet[2633]: E0911 23:56:51.359810 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"