Sep 11 23:30:55.740594 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 11 23:30:55.740615 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Sep 11 22:19:25 -00 2025
Sep 11 23:30:55.740625 kernel: KASLR enabled
Sep 11 23:30:55.740630 kernel: efi: EFI v2.7 by EDK II
Sep 11 23:30:55.740636 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 11 23:30:55.740641 kernel: random: crng init done
Sep 11 23:30:55.740647 kernel: secureboot: Secure boot disabled
Sep 11 23:30:55.740653 kernel: ACPI: Early table checksum verification disabled
Sep 11 23:30:55.740659 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 11 23:30:55.740665 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 23:30:55.740671 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740677 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740682 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740688 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740695 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740710 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740717 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740723 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740729 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 23:30:55.740735 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 11 23:30:55.740741 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 11 23:30:55.740747 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:30:55.740753 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 11 23:30:55.740759 kernel: Zone ranges:
Sep 11 23:30:55.740765 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:30:55.740785 kernel: DMA32 empty
Sep 11 23:30:55.740791 kernel: Normal empty
Sep 11 23:30:55.740797 kernel: Device empty
Sep 11 23:30:55.740804 kernel: Movable zone start for each node
Sep 11 23:30:55.740810 kernel: Early memory node ranges
Sep 11 23:30:55.740816 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 11 23:30:55.740822 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 11 23:30:55.740828 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 11 23:30:55.740833 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 11 23:30:55.740839 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 11 23:30:55.740845 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 11 23:30:55.740851 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 11 23:30:55.740859 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 11 23:30:55.740864 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 11 23:30:55.740870 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 11 23:30:55.740879 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 11 23:30:55.740885 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 11 23:30:55.740891 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 11 23:30:55.740899 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 23:30:55.740905 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 11 23:30:55.740911 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 11 23:30:55.740918 kernel: psci: probing for conduit method from ACPI.
Sep 11 23:30:55.740924 kernel: psci: PSCIv1.1 detected in firmware.
Sep 11 23:30:55.740931 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 11 23:30:55.740937 kernel: psci: Trusted OS migration not required
Sep 11 23:30:55.740943 kernel: psci: SMC Calling Convention v1.1
Sep 11 23:30:55.740950 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 11 23:30:55.740956 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 11 23:30:55.740963 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 11 23:30:55.740970 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 11 23:30:55.740976 kernel: Detected PIPT I-cache on CPU0
Sep 11 23:30:55.740982 kernel: CPU features: detected: GIC system register CPU interface
Sep 11 23:30:55.740989 kernel: CPU features: detected: Spectre-v4
Sep 11 23:30:55.740995 kernel: CPU features: detected: Spectre-BHB
Sep 11 23:30:55.741001 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 11 23:30:55.741007 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 11 23:30:55.741013 kernel: CPU features: detected: ARM erratum 1418040
Sep 11 23:30:55.741020 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 11 23:30:55.741026 kernel: alternatives: applying boot alternatives
Sep 11 23:30:55.741033 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=482086cf30ef24f68ac7a1ade8cef289f4704fd240e7f8a80dce8eef21953880
Sep 11 23:30:55.741041 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 23:30:55.741047 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 23:30:55.741054 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 23:30:55.741060 kernel: Fallback order for Node 0: 0
Sep 11 23:30:55.741066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 11 23:30:55.741072 kernel: Policy zone: DMA
Sep 11 23:30:55.741078 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 23:30:55.741085 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 11 23:30:55.741091 kernel: software IO TLB: area num 4.
Sep 11 23:30:55.741097 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 11 23:30:55.741104 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 11 23:30:55.741111 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 23:30:55.741118 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 23:30:55.741125 kernel: rcu: RCU event tracing is enabled.
Sep 11 23:30:55.741131 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 23:30:55.741138 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 23:30:55.741144 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 23:30:55.741151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 23:30:55.741157 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 23:30:55.741163 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:30:55.741170 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 23:30:55.741176 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 11 23:30:55.741184 kernel: GICv3: 256 SPIs implemented
Sep 11 23:30:55.741190 kernel: GICv3: 0 Extended SPIs implemented
Sep 11 23:30:55.741196 kernel: Root IRQ handler: gic_handle_irq
Sep 11 23:30:55.741203 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 11 23:30:55.741209 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 11 23:30:55.741215 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 11 23:30:55.741221 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 11 23:30:55.741228 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 11 23:30:55.741234 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 11 23:30:55.741241 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 11 23:30:55.741247 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 11 23:30:55.741253 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 23:30:55.741261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:30:55.741267 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 11 23:30:55.741274 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 11 23:30:55.741280 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 11 23:30:55.741287 kernel: arm-pv: using stolen time PV
Sep 11 23:30:55.741294 kernel: Console: colour dummy device 80x25
Sep 11 23:30:55.741301 kernel: ACPI: Core revision 20240827
Sep 11 23:30:55.741308 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 11 23:30:55.741323 kernel: pid_max: default: 32768 minimum: 301
Sep 11 23:30:55.741342 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 23:30:55.741351 kernel: landlock: Up and running.
Sep 11 23:30:55.741357 kernel: SELinux: Initializing.
Sep 11 23:30:55.741364 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:30:55.741371 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 23:30:55.741378 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 23:30:55.741384 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 23:30:55.741391 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 23:30:55.741397 kernel: Remapping and enabling EFI services.
Sep 11 23:30:55.741404 kernel: smp: Bringing up secondary CPUs ...
Sep 11 23:30:55.741417 kernel: Detected PIPT I-cache on CPU1
Sep 11 23:30:55.741424 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 11 23:30:55.741431 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 11 23:30:55.741443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:30:55.741450 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 11 23:30:55.741457 kernel: Detected PIPT I-cache on CPU2
Sep 11 23:30:55.741464 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 11 23:30:55.741471 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 11 23:30:55.741479 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:30:55.741486 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 11 23:30:55.741493 kernel: Detected PIPT I-cache on CPU3
Sep 11 23:30:55.741499 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 11 23:30:55.741506 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 11 23:30:55.741513 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 23:30:55.741520 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 11 23:30:55.741527 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 23:30:55.741540 kernel: SMP: Total of 4 processors activated.
Sep 11 23:30:55.741548 kernel: CPU: All CPU(s) started at EL1
Sep 11 23:30:55.741555 kernel: CPU features: detected: 32-bit EL0 Support
Sep 11 23:30:55.741562 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 11 23:30:55.741569 kernel: CPU features: detected: Common not Private translations
Sep 11 23:30:55.741575 kernel: CPU features: detected: CRC32 instructions
Sep 11 23:30:55.741582 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 11 23:30:55.741589 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 11 23:30:55.741596 kernel: CPU features: detected: LSE atomic instructions
Sep 11 23:30:55.741602 kernel: CPU features: detected: Privileged Access Never
Sep 11 23:30:55.741611 kernel: CPU features: detected: RAS Extension Support
Sep 11 23:30:55.741617 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 11 23:30:55.741624 kernel: alternatives: applying system-wide alternatives
Sep 11 23:30:55.741631 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 11 23:30:55.741638 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 11 23:30:55.741645 kernel: devtmpfs: initialized
Sep 11 23:30:55.741652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 23:30:55.741659 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 23:30:55.741666 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 11 23:30:55.741676 kernel: 0 pages in range for non-PLT usage
Sep 11 23:30:55.741683 kernel: 508560 pages in range for PLT usage
Sep 11 23:30:55.741689 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 23:30:55.741696 kernel: SMBIOS 3.0.0 present.
Sep 11 23:30:55.741707 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 11 23:30:55.741714 kernel: DMI: Memory slots populated: 1/1
Sep 11 23:30:55.741721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 23:30:55.741728 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 11 23:30:55.741735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 11 23:30:55.741743 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 11 23:30:55.741750 kernel: audit: initializing netlink subsys (disabled)
Sep 11 23:30:55.741757 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Sep 11 23:30:55.741764 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 23:30:55.741770 kernel: cpuidle: using governor menu
Sep 11 23:30:55.741777 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 11 23:30:55.741784 kernel: ASID allocator initialised with 32768 entries
Sep 11 23:30:55.741791 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 23:30:55.741797 kernel: Serial: AMBA PL011 UART driver
Sep 11 23:30:55.741806 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 23:30:55.741812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 23:30:55.741819 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 11 23:30:55.741826 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 11 23:30:55.741833 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 23:30:55.741840 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 23:30:55.741846 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 11 23:30:55.741853 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 11 23:30:55.741860 kernel: ACPI: Added _OSI(Module Device)
Sep 11 23:30:55.741866 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 23:30:55.741875 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 23:30:55.741881 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 23:30:55.741888 kernel: ACPI: Interpreter enabled
Sep 11 23:30:55.741895 kernel: ACPI: Using GIC for interrupt routing
Sep 11 23:30:55.741901 kernel: ACPI: MCFG table detected, 1 entries
Sep 11 23:30:55.741908 kernel: ACPI: CPU0 has been hot-added
Sep 11 23:30:55.741915 kernel: ACPI: CPU1 has been hot-added
Sep 11 23:30:55.741921 kernel: ACPI: CPU2 has been hot-added
Sep 11 23:30:55.741928 kernel: ACPI: CPU3 has been hot-added
Sep 11 23:30:55.741936 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 11 23:30:55.741943 kernel: printk: legacy console [ttyAMA0] enabled
Sep 11 23:30:55.741950 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 23:30:55.742081 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 23:30:55.742160 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 11 23:30:55.742219 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 11 23:30:55.742277 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 11 23:30:55.742368 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 11 23:30:55.742378 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 11 23:30:55.742386 kernel: PCI host bridge to bus 0000:00
Sep 11 23:30:55.742456 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 11 23:30:55.742510 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 11 23:30:55.742563 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 11 23:30:55.742615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 23:30:55.742695 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 11 23:30:55.742780 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 23:30:55.742842 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 11 23:30:55.742902 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 11 23:30:55.742962 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 11 23:30:55.743021 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 11 23:30:55.743080 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 11 23:30:55.743154 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 11 23:30:55.743208 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 11 23:30:55.743260 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 11 23:30:55.743325 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 11 23:30:55.743335 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 11 23:30:55.743342 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 11 23:30:55.743349 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 11 23:30:55.743359 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 11 23:30:55.743365 kernel: iommu: Default domain type: Translated
Sep 11 23:30:55.743372 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 11 23:30:55.743379 kernel: efivars: Registered efivars operations
Sep 11 23:30:55.743386 kernel: vgaarb: loaded
Sep 11 23:30:55.743392 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 11 23:30:55.743399 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 23:30:55.743406 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 23:30:55.743413 kernel: pnp: PnP ACPI init
Sep 11 23:30:55.743485 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 11 23:30:55.743495 kernel: pnp: PnP ACPI: found 1 devices
Sep 11 23:30:55.743502 kernel: NET: Registered PF_INET protocol family
Sep 11 23:30:55.743509 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 23:30:55.743516 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 23:30:55.743523 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 23:30:55.743530 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 23:30:55.743543 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 23:30:55.743552 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 23:30:55.743559 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:30:55.743566 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 23:30:55.743573 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 23:30:55.743580 kernel: PCI: CLS 0 bytes, default 64
Sep 11 23:30:55.743587 kernel: kvm [1]: HYP mode not available
Sep 11 23:30:55.743594 kernel: Initialise system trusted keyrings
Sep 11 23:30:55.743601 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 23:30:55.743607 kernel: Key type asymmetric registered
Sep 11 23:30:55.743615 kernel: Asymmetric key parser 'x509' registered
Sep 11 23:30:55.743622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 11 23:30:55.743629 kernel: io scheduler mq-deadline registered
Sep 11 23:30:55.743636 kernel: io scheduler kyber registered
Sep 11 23:30:55.743642 kernel: io scheduler bfq registered
Sep 11 23:30:55.743650 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 11 23:30:55.743657 kernel: ACPI: button: Power Button [PWRB]
Sep 11 23:30:55.743664 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 11 23:30:55.743736 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 11 23:30:55.743745 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 23:30:55.743754 kernel: thunder_xcv, ver 1.0
Sep 11 23:30:55.743761 kernel: thunder_bgx, ver 1.0
Sep 11 23:30:55.743768 kernel: nicpf, ver 1.0
Sep 11 23:30:55.743775 kernel: nicvf, ver 1.0
Sep 11 23:30:55.743843 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 11 23:30:55.743910 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-11T23:30:55 UTC (1757633455)
Sep 11 23:30:55.743919 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 23:30:55.743926 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 11 23:30:55.743935 kernel: watchdog: NMI not fully supported
Sep 11 23:30:55.743941 kernel: watchdog: Hard watchdog permanently disabled
Sep 11 23:30:55.743948 kernel: NET: Registered PF_INET6 protocol family
Sep 11 23:30:55.743955 kernel: Segment Routing with IPv6
Sep 11 23:30:55.743962 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 23:30:55.743969 kernel: NET: Registered PF_PACKET protocol family
Sep 11 23:30:55.743976 kernel: Key type dns_resolver registered
Sep 11 23:30:55.743983 kernel: registered taskstats version 1
Sep 11 23:30:55.743990 kernel: Loading compiled-in X.509 certificates
Sep 11 23:30:55.743998 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 586029c251a407eb16ec614b204f62df0d61537f'
Sep 11 23:30:55.744005 kernel: Demotion targets for Node 0: null
Sep 11 23:30:55.744012 kernel: Key type .fscrypt registered
Sep 11 23:30:55.744019 kernel: Key type fscrypt-provisioning registered
Sep 11 23:30:55.744026 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 23:30:55.744033 kernel: ima: Allocated hash algorithm: sha1
Sep 11 23:30:55.744040 kernel: ima: No architecture policies found
Sep 11 23:30:55.744047 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 11 23:30:55.744055 kernel: clk: Disabling unused clocks
Sep 11 23:30:55.744062 kernel: PM: genpd: Disabling unused power domains
Sep 11 23:30:55.744069 kernel: Warning: unable to open an initial console.
Sep 11 23:30:55.744076 kernel: Freeing unused kernel memory: 38976K
Sep 11 23:30:55.744082 kernel: Run /init as init process
Sep 11 23:30:55.744089 kernel: with arguments:
Sep 11 23:30:55.744096 kernel: /init
Sep 11 23:30:55.744102 kernel: with environment:
Sep 11 23:30:55.744109 kernel: HOME=/
Sep 11 23:30:55.744116 kernel: TERM=linux
Sep 11 23:30:55.744123 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 23:30:55.744131 systemd[1]: Successfully made /usr/ read-only.
Sep 11 23:30:55.744141 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:30:55.744149 systemd[1]: Detected virtualization kvm.
Sep 11 23:30:55.744156 systemd[1]: Detected architecture arm64.
Sep 11 23:30:55.744163 systemd[1]: Running in initrd.
Sep 11 23:30:55.744170 systemd[1]: No hostname configured, using default hostname.
Sep 11 23:30:55.744179 systemd[1]: Hostname set to .
Sep 11 23:30:55.744186 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:30:55.744194 systemd[1]: Queued start job for default target initrd.target.
Sep 11 23:30:55.744210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:30:55.744218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:30:55.744225 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 23:30:55.744233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:30:55.744241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 23:30:55.744251 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 23:30:55.744259 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 23:30:55.744267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 23:30:55.744275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:30:55.744282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:30:55.744289 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:30:55.744296 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:30:55.744305 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:30:55.744321 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:30:55.744329 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 23:30:55.744337 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 23:30:55.744344 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 23:30:55.744352 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 23:30:55.744359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:30:55.744367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:30:55.744376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:30:55.744384 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:30:55.744392 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 23:30:55.744400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:30:55.744407 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 23:30:55.744415 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 23:30:55.744423 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 23:30:55.744431 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:30:55.744438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:30:55.744449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:30:55.744461 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 23:30:55.744469 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:30:55.744479 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 23:30:55.744491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 23:30:55.744534 systemd-journald[244]: Collecting audit messages is disabled.
Sep 11 23:30:55.744554 systemd-journald[244]: Journal started
Sep 11 23:30:55.744574 systemd-journald[244]: Runtime Journal (/run/log/journal/e4cf919a101842da8c494e0a0b3357dc) is 6M, max 48.5M, 42.4M free.
Sep 11 23:30:55.748437 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 23:30:55.748468 kernel: Bridge firewalling registered
Sep 11 23:30:55.734431 systemd-modules-load[245]: Inserted module 'overlay'
Sep 11 23:30:55.748173 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 11 23:30:55.752335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:30:55.752360 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:30:55.754528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:30:55.755825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:30:55.761029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 23:30:55.762647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:30:55.764855 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:30:55.770906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:30:55.778145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:30:55.781634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:30:55.783922 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 23:30:55.788300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:30:55.790104 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 23:30:55.791853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:30:55.794488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:30:55.811922 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=482086cf30ef24f68ac7a1ade8cef289f4704fd240e7f8a80dce8eef21953880
Sep 11 23:30:55.826072 systemd-resolved[292]: Positive Trust Anchors:
Sep 11 23:30:55.826087 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:30:55.826119 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:30:55.830756 systemd-resolved[292]: Defaulting to hostname 'linux'.
Sep 11 23:30:55.831883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:30:55.833629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:30:55.892363 kernel: SCSI subsystem initialized
Sep 11 23:30:55.897333 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 23:30:55.905342 kernel: iscsi: registered transport (tcp)
Sep 11 23:30:55.917334 kernel: iscsi: registered transport (qla4xxx)
Sep 11 23:30:55.917353 kernel: QLogic iSCSI HBA Driver
Sep 11 23:30:55.936563 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:30:55.958148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:30:55.960153 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:30:56.008354 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:30:56.010469 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 23:30:56.070376 kernel: raid6: neonx8 gen() 15640 MB/s
Sep 11 23:30:56.087345 kernel: raid6: neonx4 gen() 15694 MB/s
Sep 11 23:30:56.104341 kernel: raid6: neonx2 gen() 13207 MB/s
Sep 11 23:30:56.121351 kernel: raid6: neonx1 gen() 10497 MB/s
Sep 11 23:30:56.138347 kernel: raid6: int64x8 gen() 6884 MB/s
Sep 11 23:30:56.155366 kernel: raid6: int64x4 gen() 7312 MB/s
Sep 11 23:30:56.172346 kernel: raid6: int64x2 gen() 6087 MB/s
Sep 11 23:30:56.189357 kernel: raid6: int64x1 gen() 5022 MB/s
Sep 11 23:30:56.189409 kernel: raid6: using algorithm neonx4 gen() 15694 MB/s
Sep 11 23:30:56.206358 kernel: raid6: .... xor() 12277 MB/s, rmw enabled
Sep 11 23:30:56.206402 kernel: raid6: using neon recovery algorithm
Sep 11 23:30:56.211343 kernel: xor: measuring software checksum speed
Sep 11 23:30:56.211395 kernel: 8regs : 19110 MB/sec
Sep 11 23:30:56.212340 kernel: 32regs : 21681 MB/sec
Sep 11 23:30:56.213333 kernel: arm64_neon : 25037 MB/sec
Sep 11 23:30:56.213348 kernel: xor: using function: arm64_neon (25037 MB/sec)
Sep 11 23:30:56.265358 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 23:30:56.271152 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:30:56.274474 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:30:56.301709 systemd-udevd[502]: Using default interface naming scheme 'v255'.
Sep 11 23:30:56.305841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:30:56.307580 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 23:30:56.330578 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Sep 11 23:30:56.355291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 23:30:56.358437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:30:56.414843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:30:56.417022 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 23:30:56.466079 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 11 23:30:56.472821 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 23:30:56.473457 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 23:30:56.473473 kernel: GPT:9289727 != 19775487
Sep 11 23:30:56.473482 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 23:30:56.474333 kernel: GPT:9289727 != 19775487
Sep 11 23:30:56.474366 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 23:30:56.475608 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:30:56.481523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:30:56.481609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:30:56.484857 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:30:56.486836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:30:56.515817 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 23:30:56.517098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:30:56.519511 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 23:30:56.536235 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 23:30:56.542587 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 23:30:56.543597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 23:30:56.553796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 23:30:56.555020 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 23:30:56.557062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:30:56.558974 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 23:30:56.561294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 23:30:56.562864 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 23:30:56.583014 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 23:30:56.612869 disk-uuid[594]: Primary Header is updated.
Sep 11 23:30:56.612869 disk-uuid[594]: Secondary Entries is updated.
Sep 11 23:30:56.612869 disk-uuid[594]: Secondary Header is updated.
Sep 11 23:30:56.616333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:30:57.623231 disk-uuid[602]: The operation has completed successfully.
Sep 11 23:30:57.624233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 23:30:57.657631 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 23:30:57.657770 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 23:30:57.681737 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 23:30:57.707888 sh[613]: Success
Sep 11 23:30:57.719643 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 23:30:57.719680 kernel: device-mapper: uevent: version 1.0.3
Sep 11 23:30:57.720506 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 23:30:57.727339 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 11 23:30:57.755943 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 23:30:57.758530 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 23:30:57.774135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 23:30:57.780616 kernel: BTRFS: device fsid b46dc80b-5663-423a-b9f6-4361968007e2 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 11 23:30:57.780656 kernel: BTRFS info (device dm-0): first mount of filesystem b46dc80b-5663-423a-b9f6-4361968007e2
Sep 11 23:30:57.780667 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:30:57.785335 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 23:30:57.785361 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 23:30:57.786103 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 23:30:57.787186 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 23:30:57.788159 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 23:30:57.788928 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 23:30:57.791574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 23:30:57.821599 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Sep 11 23:30:57.821648 kernel: BTRFS info (device vda6): first mount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:30:57.821659 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:30:57.824380 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:30:57.824412 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:30:57.828345 kernel: BTRFS info (device vda6): last unmount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:30:57.830062 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 23:30:57.831823 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 23:30:57.893927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 23:30:57.897441 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 23:30:57.932401 systemd-networkd[799]: lo: Link UP
Sep 11 23:30:57.933110 systemd-networkd[799]: lo: Gained carrier
Sep 11 23:30:57.934504 systemd-networkd[799]: Enumeration completed
Sep 11 23:30:57.934601 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 23:30:57.935551 systemd[1]: Reached target network.target - Network.
Sep 11 23:30:57.936864 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:30:57.936867 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 23:30:57.937335 systemd-networkd[799]: eth0: Link UP
Sep 11 23:30:57.941703 ignition[703]: Ignition 2.21.0
Sep 11 23:30:57.937618 systemd-networkd[799]: eth0: Gained carrier
Sep 11 23:30:57.941710 ignition[703]: Stage: fetch-offline
Sep 11 23:30:57.937628 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:30:57.941741 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:30:57.941748 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:30:57.941898 ignition[703]: parsed url from cmdline: ""
Sep 11 23:30:57.941900 ignition[703]: no config URL provided
Sep 11 23:30:57.941905 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 23:30:57.941911 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Sep 11 23:30:57.941929 ignition[703]: op(1): [started] loading QEMU firmware config module
Sep 11 23:30:57.941933 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 23:30:57.947742 ignition[703]: op(1): [finished] loading QEMU firmware config module
Sep 11 23:30:57.954387 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 23:30:57.993432 ignition[703]: parsing config with SHA512: 2b7f78d28b213e9e209967f5775942f52e4f52ebb11dbd3895619a07157ed224134c0c06889f2d56db065a88783f42a89c5cd2b68d4f90bd44fb0947ef5f89fe
Sep 11 23:30:57.998117 unknown[703]: fetched base config from "system"
Sep 11 23:30:57.998129 unknown[703]: fetched user config from "qemu"
Sep 11 23:30:57.998761 ignition[703]: fetch-offline: fetch-offline passed
Sep 11 23:30:57.998239 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.12
Sep 11 23:30:57.998821 ignition[703]: Ignition finished successfully
Sep 11 23:30:57.998247 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Sep 11 23:30:58.000059 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 23:30:58.001576 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 23:30:58.002531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 23:30:58.030884 ignition[813]: Ignition 2.21.0
Sep 11 23:30:58.030898 ignition[813]: Stage: kargs
Sep 11 23:30:58.031257 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:30:58.031267 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:30:58.033027 ignition[813]: kargs: kargs passed
Sep 11 23:30:58.033095 ignition[813]: Ignition finished successfully
Sep 11 23:30:58.034972 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 23:30:58.037237 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 23:30:58.075655 ignition[821]: Ignition 2.21.0
Sep 11 23:30:58.075674 ignition[821]: Stage: disks
Sep 11 23:30:58.075808 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Sep 11 23:30:58.075817 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:30:58.076520 ignition[821]: disks: disks passed
Sep 11 23:30:58.076563 ignition[821]: Ignition finished successfully
Sep 11 23:30:58.078645 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 23:30:58.079616 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 23:30:58.080725 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 23:30:58.082201 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 23:30:58.083609 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 23:30:58.085178 systemd[1]: Reached target basic.target - Basic System.
Sep 11 23:30:58.087223 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 23:30:58.116154 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 23:30:58.119805 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 23:30:58.121964 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 23:30:58.178334 kernel: EXT4-fs (vda9): mounted filesystem f6e22e61-f8f0-470c-befd-91d703c5ae2a r/w with ordered data mode. Quota mode: none.
Sep 11 23:30:58.179181 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 23:30:58.180281 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 23:30:58.182145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 23:30:58.183577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 23:30:58.184390 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 23:30:58.184426 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 23:30:58.184447 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 23:30:58.193627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 23:30:58.195883 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 23:30:58.198335 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 11 23:30:58.200339 kernel: BTRFS info (device vda6): first mount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:30:58.200368 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:30:58.202563 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:30:58.202580 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:30:58.204602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 23:30:58.227335 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 23:30:58.230268 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Sep 11 23:30:58.233180 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 23:30:58.236959 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 23:30:58.301869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 23:30:58.303747 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 23:30:58.305098 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 23:30:58.324368 kernel: BTRFS info (device vda6): last unmount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:30:58.332191 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 23:30:58.344087 ignition[954]: INFO : Ignition 2.21.0
Sep 11 23:30:58.344087 ignition[954]: INFO : Stage: mount
Sep 11 23:30:58.345511 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:30:58.345511 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:30:58.348195 ignition[954]: INFO : mount: mount passed
Sep 11 23:30:58.348195 ignition[954]: INFO : Ignition finished successfully
Sep 11 23:30:58.348934 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 23:30:58.351098 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 23:30:58.779629 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 23:30:58.783561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 23:30:58.811167 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Sep 11 23:30:58.811207 kernel: BTRFS info (device vda6): first mount of filesystem 5a1cfa59-baaf-486a-8806-213f62c7400a
Sep 11 23:30:58.811217 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 23:30:58.814342 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 23:30:58.814372 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 23:30:58.816154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 23:30:58.846244 ignition[982]: INFO : Ignition 2.21.0
Sep 11 23:30:58.846244 ignition[982]: INFO : Stage: files
Sep 11 23:30:58.847557 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:30:58.847557 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:30:58.847557 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 23:30:58.853561 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 23:30:58.853561 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 23:30:58.857198 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 23:30:58.858377 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 23:30:58.858377 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 23:30:58.857772 unknown[982]: wrote ssh authorized keys file for user: core
Sep 11 23:30:58.861371 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 11 23:30:58.861371 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 11 23:30:59.002456 systemd-networkd[799]: eth0: Gained IPv6LL
Sep 11 23:31:00.856971 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 23:31:01.783028 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 23:31:01.786768 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 23:31:01.805736 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 23:31:01.807470 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 23:31:01.807470 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 11 23:31:01.811530 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 11 23:31:01.811530 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 11 23:31:01.815085 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 11 23:31:02.297598 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 23:31:03.442431 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 11 23:31:03.442431 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 23:31:03.445539 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 23:31:03.468807 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 23:31:03.468807 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 23:31:03.468807 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 23:31:03.473770 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 23:31:03.473770 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 23:31:03.473770 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 23:31:03.473770 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 23:31:03.488303 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 23:31:03.491897 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 23:31:03.494389 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 23:31:03.494389 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 23:31:03.494389 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 23:31:03.494389 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 23:31:03.494389 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 23:31:03.494389 ignition[982]: INFO : files: files passed
Sep 11 23:31:03.494389 ignition[982]: INFO : Ignition finished successfully
Sep 11 23:31:03.495761 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 23:31:03.499742 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 23:31:03.519661 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 23:31:03.523269 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 23:31:03.523408 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 23:31:03.528982 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 23:31:03.532240 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 23:31:03.532240 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 23:31:03.535160 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 23:31:03.535866 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 23:31:03.538550 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 23:31:03.540218 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 23:31:03.590179 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 23:31:03.591110 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 23:31:03.592549 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 23:31:03.594076 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 23:31:03.595459 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 23:31:03.596353 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 23:31:03.612350 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 23:31:03.614636 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 23:31:03.643508 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:31:03.644468 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:31:03.646054 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 23:31:03.647413 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 23:31:03.647535 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 23:31:03.649519 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 23:31:03.651024 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 23:31:03.652233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 23:31:03.653553 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 23:31:03.655048 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 23:31:03.656516 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 23:31:03.658026 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 23:31:03.659455 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 23:31:03.660989 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 23:31:03.662585 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 23:31:03.663973 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 23:31:03.665146 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 23:31:03.665266 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 23:31:03.667120 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:31:03.668735 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:31:03.670255 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 23:31:03.670337 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:31:03.672092 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 23:31:03.672203 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 23:31:03.674269 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 23:31:03.674388 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 23:31:03.675887 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 23:31:03.677802 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 23:31:03.682351 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:31:03.683417 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 23:31:03.685233 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 23:31:03.686473 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 23:31:03.686549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 23:31:03.687705 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 23:31:03.687781 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 23:31:03.689538 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 23:31:03.689649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 23:31:03.691870 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 23:31:03.691967 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 23:31:03.693930 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 23:31:03.695880 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 23:31:03.696761 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 23:31:03.696878 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:31:03.698201 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 23:31:03.698287 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 23:31:03.702809 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 23:31:03.708447 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 23:31:03.716129 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 23:31:03.720194 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 23:31:03.720306 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 23:31:03.722222 ignition[1038]: INFO : Ignition 2.21.0
Sep 11 23:31:03.722222 ignition[1038]: INFO : Stage: umount
Sep 11 23:31:03.722222 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 23:31:03.722222 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 23:31:03.722222 ignition[1038]: INFO : umount: umount passed
Sep 11 23:31:03.722222 ignition[1038]: INFO : Ignition finished successfully
Sep 11 23:31:03.723264 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 23:31:03.723379 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 23:31:03.724243 systemd[1]: Stopped target network.target - Network.
Sep 11 23:31:03.726268 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 23:31:03.726337 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 23:31:03.728396 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 23:31:03.728441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 23:31:03.729605 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 23:31:03.729645 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 23:31:03.730854 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 23:31:03.730889 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 23:31:03.732211 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 23:31:03.732253 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 23:31:03.733820 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 23:31:03.735052 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 23:31:03.738898 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 23:31:03.738980 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 23:31:03.741909 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 23:31:03.742397 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 23:31:03.742477 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:31:03.745715 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 23:31:03.745937 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 23:31:03.746037 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 23:31:03.748725 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 23:31:03.749132 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 23:31:03.750591 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 23:31:03.750642 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:31:03.753678 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 23:31:03.754876 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 23:31:03.754931 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 23:31:03.756472 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 23:31:03.756512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:31:03.758709 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 23:31:03.758751 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:31:03.760221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:31:03.762224 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 23:31:03.769880 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 23:31:03.770012 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:31:03.771538 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 23:31:03.771577 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:31:03.772778 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 23:31:03.772806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:31:03.774260 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 23:31:03.774300 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 23:31:03.776342 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 23:31:03.776381 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 23:31:03.778597 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 23:31:03.778647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 23:31:03.781749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 23:31:03.783151 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 23:31:03.783204 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:31:03.785656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 23:31:03.785720 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:31:03.788124 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 11 23:31:03.788183 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:31:03.791220 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 23:31:03.791259 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:31:03.793093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 23:31:03.793141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:31:03.796191 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 23:31:03.797456 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 23:31:03.802372 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 23:31:03.802485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 23:31:03.804184 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 23:31:03.806222 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 23:31:03.841493 systemd[1]: Switching root.
Sep 11 23:31:03.887051 systemd-journald[244]: Journal stopped
Sep 11 23:31:04.599751 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 11 23:31:04.599805 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 23:31:04.599820 kernel: SELinux: policy capability open_perms=1
Sep 11 23:31:04.599830 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 23:31:04.599839 kernel: SELinux: policy capability always_check_network=0
Sep 11 23:31:04.599849 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 23:31:04.599862 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 23:31:04.599872 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 23:31:04.599880 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 23:31:04.599889 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 23:31:04.599899 kernel: audit: type=1403 audit(1757633464.023:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 23:31:04.599913 systemd[1]: Successfully loaded SELinux policy in 49.775ms.
Sep 11 23:31:04.599932 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.640ms.
Sep 11 23:31:04.599944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 23:31:04.599955 systemd[1]: Detected virtualization kvm.
Sep 11 23:31:04.599965 systemd[1]: Detected architecture arm64.
Sep 11 23:31:04.599976 systemd[1]: Detected first boot.
Sep 11 23:31:04.599986 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 23:31:04.599995 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 23:31:04.600005 zram_generator::config[1085]: No configuration found.
Sep 11 23:31:04.600016 systemd[1]: Populated /etc with preset unit settings.
Sep 11 23:31:04.600028 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 23:31:04.600038 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 23:31:04.600048 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 23:31:04.600057 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 23:31:04.600067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 23:31:04.600079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 23:31:04.600089 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 23:31:04.600099 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 23:31:04.600111 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 23:31:04.600121 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 23:31:04.600132 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 23:31:04.600142 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 23:31:04.600152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 23:31:04.600163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 23:31:04.600174 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 23:31:04.600187 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 23:31:04.600211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 23:31:04.600224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 23:31:04.600234 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 11 23:31:04.600244 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 23:31:04.600254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 23:31:04.600265 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 23:31:04.600275 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 23:31:04.600287 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 23:31:04.600299 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 23:31:04.600323 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 23:31:04.600335 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 23:31:04.600346 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 23:31:04.600355 systemd[1]: Reached target swap.target - Swaps.
Sep 11 23:31:04.600365 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 23:31:04.600375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 23:31:04.600385 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 23:31:04.600395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 23:31:04.600405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 23:31:04.600417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 23:31:04.600427 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 23:31:04.600437 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 23:31:04.600447 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 23:31:04.600457 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 23:31:04.600467 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 23:31:04.600477 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 23:31:04.600487 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 23:31:04.600498 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 23:31:04.600509 systemd[1]: Reached target machines.target - Containers.
Sep 11 23:31:04.600519 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 23:31:04.600529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:31:04.600540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 23:31:04.600551 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 23:31:04.600560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:31:04.600571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 23:31:04.600582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:31:04.600593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 23:31:04.600603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:31:04.600613 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 23:31:04.600624 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 23:31:04.600634 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 23:31:04.600644 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 23:31:04.600654 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 23:31:04.600669 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:31:04.600682 kernel: loop: module loaded
Sep 11 23:31:04.600691 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 23:31:04.600701 kernel: fuse: init (API version 7.41)
Sep 11 23:31:04.600710 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 23:31:04.600720 kernel: ACPI: bus type drm_connector registered
Sep 11 23:31:04.600730 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 23:31:04.600741 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 23:31:04.600750 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 23:31:04.600760 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 23:31:04.600772 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 23:31:04.600782 systemd[1]: Stopped verity-setup.service.
Sep 11 23:31:04.600811 systemd-journald[1149]: Collecting audit messages is disabled.
Sep 11 23:31:04.600833 systemd-journald[1149]: Journal started
Sep 11 23:31:04.600855 systemd-journald[1149]: Runtime Journal (/run/log/journal/e4cf919a101842da8c494e0a0b3357dc) is 6M, max 48.5M, 42.4M free.
Sep 11 23:31:04.401989 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 23:31:04.425307 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 23:31:04.425708 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 23:31:04.603164 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 23:31:04.604133 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 23:31:04.605216 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 23:31:04.606364 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 23:31:04.607539 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 23:31:04.608573 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 23:31:04.609889 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 23:31:04.611233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 23:31:04.612645 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 23:31:04.612841 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 23:31:04.614153 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 23:31:04.615493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:31:04.615659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:31:04.616827 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 23:31:04.616989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 23:31:04.618126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:31:04.618287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:31:04.620073 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 23:31:04.620566 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 23:31:04.621752 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:31:04.621931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:31:04.623501 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 23:31:04.624783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 23:31:04.626256 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 23:31:04.627619 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 23:31:04.640031 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 23:31:04.642449 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 23:31:04.644260 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 23:31:04.645276 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 23:31:04.645322 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 23:31:04.647078 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 23:31:04.657172 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 23:31:04.658939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:31:04.660307 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 23:31:04.662345 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 23:31:04.663369 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 23:31:04.666068 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 23:31:04.667149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 23:31:04.668554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 23:31:04.670901 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 23:31:04.675262 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 23:31:04.680478 systemd-journald[1149]: Time spent on flushing to /var/log/journal/e4cf919a101842da8c494e0a0b3357dc is 26.599ms for 889 entries.
Sep 11 23:31:04.680478 systemd-journald[1149]: System Journal (/var/log/journal/e4cf919a101842da8c494e0a0b3357dc) is 8M, max 195.6M, 187.6M free.
Sep 11 23:31:04.719142 systemd-journald[1149]: Received client request to flush runtime journal.
Sep 11 23:31:04.719193 kernel: loop0: detected capacity change from 0 to 138376
Sep 11 23:31:04.719209 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 23:31:04.679169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 23:31:04.680681 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 23:31:04.683940 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 23:31:04.690644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 23:31:04.695069 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 23:31:04.701375 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 23:31:04.708267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 23:31:04.721011 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 11 23:31:04.721022 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Sep 11 23:31:04.723393 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 23:31:04.726767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 23:31:04.732342 kernel: loop1: detected capacity change from 0 to 107312
Sep 11 23:31:04.732507 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 23:31:04.741712 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 23:31:04.765359 kernel: loop2: detected capacity change from 0 to 203944
Sep 11 23:31:04.769977 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 23:31:04.772937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 23:31:04.796350 kernel: loop3: detected capacity change from 0 to 138376
Sep 11 23:31:04.801273 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Sep 11 23:31:04.801608 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Sep 11 23:31:04.804357 kernel: loop4: detected capacity change from 0 to 107312
Sep 11 23:31:04.806288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 23:31:04.812588 kernel: loop5: detected capacity change from 0 to 203944
Sep 11 23:31:04.817781 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 23:31:04.818185 (sd-merge)[1225]: Merged extensions into '/usr'.
Sep 11 23:31:04.823617 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 23:31:04.823632 systemd[1]: Reloading...
Sep 11 23:31:04.881454 zram_generator::config[1252]: No configuration found.
Sep 11 23:31:04.946764 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 23:31:04.967531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 23:31:05.031078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 23:31:05.031227 systemd[1]: Reloading finished in 207 ms.
Sep 11 23:31:05.058843 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 23:31:05.060108 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 23:31:05.076678 systemd[1]: Starting ensure-sysext.service...
Sep 11 23:31:05.078384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 23:31:05.087015 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Sep 11 23:31:05.087030 systemd[1]: Reloading...
Sep 11 23:31:05.094845 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 23:31:05.095166 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 23:31:05.095544 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 23:31:05.095845 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 23:31:05.096558 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 23:31:05.096877 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 11 23:31:05.096990 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 11 23:31:05.099652 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 23:31:05.099759 systemd-tmpfiles[1287]: Skipping /boot
Sep 11 23:31:05.108655 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 23:31:05.108807 systemd-tmpfiles[1287]: Skipping /boot
Sep 11 23:31:05.138341 zram_generator::config[1314]: No configuration found.
Sep 11 23:31:05.205404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 23:31:05.267040 systemd[1]: Reloading finished in 179 ms.
Sep 11 23:31:05.290897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 23:31:05.296357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 23:31:05.301889 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 23:31:05.304203 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 23:31:05.312229 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 23:31:05.315408 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 23:31:05.317869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 23:31:05.323674 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 23:31:05.335706 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 23:31:05.340138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:31:05.341690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:31:05.345633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:31:05.351525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:31:05.352577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:31:05.352756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:31:05.355395 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 23:31:05.357100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:31:05.357364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:31:05.359295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:31:05.359493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:31:05.361162 augenrules[1380]: No rules
Sep 11 23:31:05.363000 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 23:31:05.363184 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 23:31:05.371979 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Sep 11 23:31:05.374715 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 23:31:05.378365 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 23:31:05.380469 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:31:05.380716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:31:05.387656 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 23:31:05.390291 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 23:31:05.391210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 23:31:05.399242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 23:31:05.402279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 23:31:05.404287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 23:31:05.406162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 23:31:05.407136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 23:31:05.407244 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 23:31:05.410163 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 23:31:05.411149 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 23:31:05.412172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 23:31:05.413877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 23:31:05.415779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 23:31:05.419139 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 23:31:05.419304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 23:31:05.421907 systemd[1]: Finished ensure-sysext.service.
Sep 11 23:31:05.422928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 23:31:05.423098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 23:31:05.424441 augenrules[1396]: /sbin/augenrules: No change
Sep 11 23:31:05.424495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 23:31:05.424634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 23:31:05.431206 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 23:31:05.435329 augenrules[1449]: No rules
Sep 11 23:31:05.436231 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 23:31:05.436476 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 23:31:05.448558 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 23:31:05.449448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 23:31:05.449515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 23:31:05.453099 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 23:31:05.474647 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 11 23:31:05.553644 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 23:31:05.557544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 23:31:05.567348 systemd-networkd[1458]: lo: Link UP
Sep 11 23:31:05.567357 systemd-networkd[1458]: lo: Gained carrier
Sep 11 23:31:05.568420 systemd-networkd[1458]: Enumeration completed
Sep 11 23:31:05.568524 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 23:31:05.569110 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:31:05.569121 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 23:31:05.569784 systemd-networkd[1458]: eth0: Link UP
Sep 11 23:31:05.569902 systemd-networkd[1458]: eth0: Gained carrier
Sep 11 23:31:05.569918 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 23:31:05.570882 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 23:31:05.575244 systemd-resolved[1354]: Positive Trust Anchors:
Sep 11 23:31:05.575266 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 23:31:05.575298 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 23:31:05.575959 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 23:31:05.584916 systemd-resolved[1354]: Defaulting to hostname 'linux'.
Sep 11 23:31:05.585500 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 23:31:05.588422 systemd-networkd[1458]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 23:31:05.590773 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 23:31:05.591936 systemd[1]: Reached target network.target - Network.
Sep 11 23:31:05.592993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 23:31:05.596144 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 23:31:05.115334 systemd-resolved[1354]: Clock change detected. Flushing caches.
Sep 11 23:31:05.123287 systemd-journald[1149]: Time jumped backwards, rotating.
Sep 11 23:31:05.115523 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 23:31:05.116706 systemd-timesyncd[1459]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 23:31:05.116760 systemd-timesyncd[1459]: Initial clock synchronization to Thu 2025-09-11 23:31:05.115274 UTC.
Sep 11 23:31:05.117516 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 23:31:05.118751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 23:31:05.120226 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 23:31:05.121482 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 23:31:05.121512 systemd[1]: Reached target paths.target - Path Units.
Sep 11 23:31:05.122432 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 23:31:05.123645 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 23:31:05.125075 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 23:31:05.127360 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 23:31:05.131248 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 23:31:05.133925 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 23:31:05.136972 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 23:31:05.138269 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 23:31:05.139265 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 23:31:05.150372 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 23:31:05.151705 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 23:31:05.153535 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 23:31:05.154705 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 23:31:05.156024 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 23:31:05.156837 systemd[1]: Reached target basic.target - Basic System.
Sep 11 23:31:05.157672 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 23:31:05.157702 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 23:31:05.158871 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 23:31:05.161369 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 23:31:05.164611 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 23:31:05.176272 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 23:31:05.179704 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 23:31:05.180627 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 23:31:05.183427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 23:31:05.188274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 23:31:05.191404 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 23:31:05.193678 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 23:31:05.196428 extend-filesystems[1497]: Found /dev/vda6
Sep 11 23:31:05.198438 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 23:31:05.200479 extend-filesystems[1497]: Found /dev/vda9
Sep 11 23:31:05.200589 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 23:31:05.200980 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 23:31:05.204195 jq[1496]: false
Sep 11 23:31:05.204342 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 23:31:05.204763 extend-filesystems[1497]: Checking size of /dev/vda9
Sep 11 23:31:05.206113 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 23:31:05.209463 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 23:31:05.210694 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 23:31:05.210873 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 23:31:05.211704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 23:31:05.211922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 23:31:05.218169 jq[1512]: true
Sep 11 23:31:05.227183 tar[1518]: linux-arm64/helm
Sep 11 23:31:05.233426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 23:31:05.237826 jq[1523]: true
Sep 11 23:31:05.251771 update_engine[1509]: I20250911 23:31:05.251617 1509 main.cc:92] Flatcar Update Engine starting
Sep 11 23:31:05.253538 dbus-daemon[1493]: [system] SELinux support is enabled
Sep 11 23:31:05.253994 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 23:31:05.258099 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 23:31:05.258437 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 11 23:31:05.258469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 23:31:05.258652 update_engine[1509]: I20250911 23:31:05.258592 1509 update_check_scheduler.cc:74] Next update check in 6m39s
Sep 11 23:31:05.259796 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 23:31:05.259822 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 23:31:05.261364 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 23:31:05.265779 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 23:31:05.266903 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 23:31:05.268259 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 23:31:05.269533 extend-filesystems[1497]: Resized partition /dev/vda9
Sep 11 23:31:05.271926 extend-filesystems[1543]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 23:31:05.286260 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 23:31:05.315187 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 23:31:05.338144 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 23:31:05.338144 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 23:31:05.338144 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 23:31:05.344143 bash[1556]: Updated "/home/core/.ssh/authorized_keys"
Sep 11 23:31:05.339716 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 11 23:31:05.344296 extend-filesystems[1497]: Resized filesystem in /dev/vda9
Sep 11 23:31:05.344525 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 23:31:05.344745 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 23:31:05.379066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 23:31:05.384666 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 11 23:31:05.405096 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 23:31:05.433089 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 11 23:31:05.433312 systemd-logind[1506]: New seat seat0.
Sep 11 23:31:05.435346 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 23:31:05.472454 containerd[1535]: time="2025-09-11T23:31:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 23:31:05.475536 containerd[1535]: time="2025-09-11T23:31:05.475469174Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 11 23:31:05.487181 containerd[1535]: time="2025-09-11T23:31:05.486833254Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.48µs"
Sep 11 23:31:05.487181 containerd[1535]: time="2025-09-11T23:31:05.486874294Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 11 23:31:05.487181 containerd[1535]: time="2025-09-11T23:31:05.486892214Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 11 23:31:05.487181 containerd[1535]: time="2025-09-11T23:31:05.487062574Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 11 23:31:05.487181 containerd[1535]: time="2025-09-11T23:31:05.487080734Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 11 23:31:05.487181 containerd[1535]: time="2025-09-11T23:31:05.487103454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487361 containerd[1535]: time="2025-09-11T23:31:05.487337654Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487403 containerd[1535]: time="2025-09-11T23:31:05.487392294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487707 containerd[1535]: time="2025-09-11T23:31:05.487680814Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487770 containerd[1535]: time="2025-09-11T23:31:05.487757054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487815 containerd[1535]: time="2025-09-11T23:31:05.487803174Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487867 containerd[1535]: time="2025-09-11T23:31:05.487855014Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 11 23:31:05.487996 containerd[1535]: time="2025-09-11T23:31:05.487978414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 11 23:31:05.488302 containerd[1535]: time="2025-09-11T23:31:05.488275894Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 23:31:05.488404 containerd[1535]: time="2025-09-11T23:31:05.488388414Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 23:31:05.488449 containerd[1535]: time="2025-09-11T23:31:05.488438094Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 11 23:31:05.488535 containerd[1535]: time="2025-09-11T23:31:05.488520334Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 11 23:31:05.488854 containerd[1535]: time="2025-09-11T23:31:05.488820214Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 11 23:31:05.488940 containerd[1535]: time="2025-09-11T23:31:05.488923334Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 23:31:05.493107 containerd[1535]: time="2025-09-11T23:31:05.493067574Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 11 23:31:05.493159 containerd[1535]: time="2025-09-11T23:31:05.493123534Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 11 23:31:05.493159 containerd[1535]: time="2025-09-11T23:31:05.493140214Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 11 23:31:05.493196 containerd[1535]: time="2025-09-11T23:31:05.493171094Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 11 23:31:05.493214 containerd[1535]: time="2025-09-11T23:31:05.493203334Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 11 23:31:05.493230 containerd[1535]: time="2025-09-11T23:31:05.493215454Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 11 23:31:05.493265 containerd[1535]: time="2025-09-11T23:31:05.493231494Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 11 23:31:05.493265 containerd[1535]: time="2025-09-11T23:31:05.493243654Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 23:31:05.493265 containerd[1535]: time="2025-09-11T23:31:05.493255014Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 11 23:31:05.493308 containerd[1535]: time="2025-09-11T23:31:05.493264854Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 11 23:31:05.493308 containerd[1535]: time="2025-09-11T23:31:05.493274174Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 11 23:31:05.493308 containerd[1535]: time="2025-09-11T23:31:05.493289654Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493531494Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493578054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493600134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493612134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493627414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493641854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493660574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493675294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493690974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493749054Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493794654Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.493994414Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.494046734Z" level=info msg="Start snapshots syncer"
Sep 11 23:31:05.494269 containerd[1535]: time="2025-09-11T23:31:05.494070214Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 11 23:31:05.496566 containerd[1535]: time="2025-09-11T23:31:05.496505134Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 11 23:31:05.496680 containerd[1535]: time="2025-09-11T23:31:05.496588614Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 11 23:31:05.496721 containerd[1535]: time="2025-09-11T23:31:05.496696454Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 11 23:31:05.496866 containerd[1535]: time="2025-09-11T23:31:05.496845734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 11 23:31:05.496902 containerd[1535]: time="2025-09-11T23:31:05.496885254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 11 23:31:05.496921 containerd[1535]: time="2025-09-11T23:31:05.496904454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 11 23:31:05.496938 containerd[1535]: time="2025-09-11T23:31:05.496919854Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 11 23:31:05.496955 containerd[1535]: time="2025-09-11T23:31:05.496934614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 11 23:31:05.497014 containerd[1535]: time="2025-09-11T23:31:05.496955494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 11 23:31:05.497014 containerd[1535]: time="2025-09-11T23:31:05.496970214Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 11 23:31:05.497014 containerd[1535]: time="2025-09-11T23:31:05.497004454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 11 23:31:05.497061 containerd[1535]: time="2025-09-11T23:31:05.497019814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 11 23:31:05.497061 containerd[1535]: time="2025-09-11T23:31:05.497034494Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 11 23:31:05.497099 containerd[1535]: time="2025-09-11T23:31:05.497080454Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 23:31:05.497119 containerd[1535]: time="2025-09-11T23:31:05.497094734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 23:31:05.497119 containerd[1535]: time="2025-09-11T23:31:05.497108814Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 23:31:05.497160 containerd[1535]: time="2025-09-11T23:31:05.497122534Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 23:31:05.497160 containerd[1535]: time="2025-09-11T23:31:05.497131214Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 11 23:31:05.497192 containerd[1535]: time="2025-09-11T23:31:05.497143814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 11 23:31:05.497234 containerd[1535]: time="2025-09-11T23:31:05.497217774Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 11 23:31:05.497407 containerd[1535]: time="2025-09-11T23:31:05.497394454Z" level=info msg="runtime interface created"
Sep 11 23:31:05.497407 containerd[1535]: time="2025-09-11T23:31:05.497405014Z" level=info msg="created NRI interface"
Sep 11 23:31:05.497454 containerd[1535]: time="2025-09-11T23:31:05.497414974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 11 23:31:05.497454 containerd[1535]: time="2025-09-11T23:31:05.497428894Z" level=info msg="Connect containerd service"
Sep 11 23:31:05.497485 containerd[1535]: time="2025-09-11T23:31:05.497459054Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 11 23:31:05.498172 containerd[1535]: time="2025-09-11T23:31:05.498127094Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 11 23:31:05.591754 containerd[1535]: time="2025-09-11T23:31:05.591681734Z" level=info msg="Start subscribing containerd event"
Sep 11 23:31:05.591754 containerd[1535]: time="2025-09-11T23:31:05.591754574Z" level=info msg="Start recovering state"
Sep 11 23:31:05.591886 containerd[1535]: time="2025-09-11T23:31:05.591839014Z" level=info msg="Start event monitor"
Sep 11 23:31:05.591886 containerd[1535]: time="2025-09-11T23:31:05.591854214Z" level=info msg="Start cni network conf syncer for default"
Sep 11 23:31:05.591886 containerd[1535]: time="2025-09-11T23:31:05.591863054Z" level=info msg="Start streaming server"
Sep 11 23:31:05.591886 containerd[1535]: time="2025-09-11T23:31:05.591879534Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 11 23:31:05.591886 containerd[1535]: time="2025-09-11T23:31:05.591886174Z" level=info msg="runtime interface starting up..."
Sep 11 23:31:05.591961 containerd[1535]: time="2025-09-11T23:31:05.591892054Z" level=info msg="starting plugins..."
Sep 11 23:31:05.591961 containerd[1535]: time="2025-09-11T23:31:05.591905214Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 11 23:31:05.593197 containerd[1535]: time="2025-09-11T23:31:05.592126294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 11 23:31:05.593197 containerd[1535]: time="2025-09-11T23:31:05.592205774Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 11 23:31:05.592370 systemd[1]: Started containerd.service - containerd container runtime.
Sep 11 23:31:05.593547 containerd[1535]: time="2025-09-11T23:31:05.593520294Z" level=info msg="containerd successfully booted in 0.121572s"
Sep 11 23:31:05.640777 tar[1518]: linux-arm64/LICENSE
Sep 11 23:31:05.640871 tar[1518]: linux-arm64/README.md
Sep 11 23:31:05.663250 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 11 23:31:05.729615 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 23:31:05.750036 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 23:31:05.752596 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 23:31:05.774055 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 23:31:05.774302 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 23:31:05.776866 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 23:31:05.805874 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 11 23:31:05.808772 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 11 23:31:05.811018 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 11 23:31:05.812389 systemd[1]: Reached target getty.target - Login Prompts.
Sep 11 23:31:06.264364 systemd-networkd[1458]: eth0: Gained IPv6LL
Sep 11 23:31:06.269868 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 11 23:31:06.271424 systemd[1]: Reached target network-online.target - Network is Online.
Sep 11 23:31:06.275762 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 11 23:31:06.278641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:31:06.286424 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 11 23:31:06.303289 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 11 23:31:06.304406 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 11 23:31:06.307288 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 11 23:31:06.309268 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 11 23:31:06.885723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:31:06.887224 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 11 23:31:06.889019 systemd[1]: Startup finished in 2.018s (kernel) + 8.423s (initrd) + 3.398s (userspace) = 13.840s.
Sep 11 23:31:06.890320 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 23:31:07.292099 kubelet[1633]: E0911 23:31:07.291988 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 23:31:07.294456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 23:31:07.294607 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 23:31:07.294921 systemd[1]: kubelet.service: Consumed 790ms CPU time, 256.5M memory peak.
Sep 11 23:31:07.511695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 11 23:31:07.514230 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:47984.service - OpenSSH per-connection server daemon (10.0.0.1:47984).
Sep 11 23:31:07.592299 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 47984 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:07.595554 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:07.602592 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 11 23:31:07.603805 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 11 23:31:07.609807 systemd-logind[1506]: New session 1 of user core.
Sep 11 23:31:07.654223 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 11 23:31:07.658658 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 11 23:31:07.674230 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 11 23:31:07.677319 systemd-logind[1506]: New session c1 of user core.
Sep 11 23:31:07.785755 systemd[1650]: Queued start job for default target default.target.
Sep 11 23:31:07.808144 systemd[1650]: Created slice app.slice - User Application Slice.
Sep 11 23:31:07.808206 systemd[1650]: Reached target paths.target - Paths.
Sep 11 23:31:07.808246 systemd[1650]: Reached target timers.target - Timers.
Sep 11 23:31:07.809556 systemd[1650]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 11 23:31:07.819417 systemd[1650]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 11 23:31:07.819599 systemd[1650]: Reached target sockets.target - Sockets.
Sep 11 23:31:07.819713 systemd[1650]: Reached target basic.target - Basic System.
Sep 11 23:31:07.819803 systemd[1650]: Reached target default.target - Main User Target.
Sep 11 23:31:07.819878 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 11 23:31:07.819980 systemd[1650]: Startup finished in 136ms.
Sep 11 23:31:07.821391 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 11 23:31:07.886859 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:48000.service - OpenSSH per-connection server daemon (10.0.0.1:48000).
Sep 11 23:31:07.946089 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 48000 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:07.947596 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:07.952217 systemd-logind[1506]: New session 2 of user core.
Sep 11 23:31:07.966333 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 11 23:31:08.020666 sshd[1663]: Connection closed by 10.0.0.1 port 48000
Sep 11 23:31:08.021082 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Sep 11 23:31:08.035124 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:48000.service: Deactivated successfully.
Sep 11 23:31:08.036941 systemd[1]: session-2.scope: Deactivated successfully.
Sep 11 23:31:08.037936 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit.
Sep 11 23:31:08.039994 systemd-logind[1506]: Removed session 2.
Sep 11 23:31:08.042462 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:48004.service - OpenSSH per-connection server daemon (10.0.0.1:48004).
Sep 11 23:31:08.102667 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:08.103954 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:08.108568 systemd-logind[1506]: New session 3 of user core.
Sep 11 23:31:08.119405 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 11 23:31:08.169284 sshd[1671]: Connection closed by 10.0.0.1 port 48004
Sep 11 23:31:08.169217 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Sep 11 23:31:08.189439 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:48004.service: Deactivated successfully.
Sep 11 23:31:08.191115 systemd[1]: session-3.scope: Deactivated successfully.
Sep 11 23:31:08.191746 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit.
Sep 11 23:31:08.193804 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:48018.service - OpenSSH per-connection server daemon (10.0.0.1:48018).
Sep 11 23:31:08.194628 systemd-logind[1506]: Removed session 3.
Sep 11 23:31:08.254669 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 48018 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:08.256050 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:08.259838 systemd-logind[1506]: New session 4 of user core.
Sep 11 23:31:08.266317 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 11 23:31:08.318874 sshd[1679]: Connection closed by 10.0.0.1 port 48018
Sep 11 23:31:08.319353 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Sep 11 23:31:08.332364 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:48018.service: Deactivated successfully.
Sep 11 23:31:08.335516 systemd[1]: session-4.scope: Deactivated successfully.
Sep 11 23:31:08.336733 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit.
Sep 11 23:31:08.338480 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:48028.service - OpenSSH per-connection server daemon (10.0.0.1:48028).
Sep 11 23:31:08.339185 systemd-logind[1506]: Removed session 4.
Sep 11 23:31:08.387937 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 48028 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:08.389354 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:08.393120 systemd-logind[1506]: New session 5 of user core.
Sep 11 23:31:08.412345 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 11 23:31:08.468664 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 11 23:31:08.468923 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:31:08.492929 sudo[1688]: pam_unix(sudo:session): session closed for user root
Sep 11 23:31:08.495910 sshd[1687]: Connection closed by 10.0.0.1 port 48028
Sep 11 23:31:08.495034 sshd-session[1685]: pam_unix(sshd:session): session closed for user core
Sep 11 23:31:08.503298 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:48028.service: Deactivated successfully.
Sep 11 23:31:08.504851 systemd[1]: session-5.scope: Deactivated successfully.
Sep 11 23:31:08.506756 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit.
Sep 11 23:31:08.509349 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:48038.service - OpenSSH per-connection server daemon (10.0.0.1:48038).
Sep 11 23:31:08.509831 systemd-logind[1506]: Removed session 5.
Sep 11 23:31:08.566822 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:08.568331 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:08.572580 systemd-logind[1506]: New session 6 of user core.
Sep 11 23:31:08.586371 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 11 23:31:08.638595 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 11 23:31:08.638870 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:31:08.662075 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 11 23:31:08.667060 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 11 23:31:08.667343 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:31:08.675333 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 23:31:08.716393 augenrules[1720]: No rules
Sep 11 23:31:08.717564 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 23:31:08.717764 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 23:31:08.718927 sudo[1697]: pam_unix(sudo:session): session closed for user root
Sep 11 23:31:08.720293 sshd[1696]: Connection closed by 10.0.0.1 port 48038
Sep 11 23:31:08.720655 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Sep 11 23:31:08.735395 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:48038.service: Deactivated successfully.
Sep 11 23:31:08.736945 systemd[1]: session-6.scope: Deactivated successfully.
Sep 11 23:31:08.737597 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit.
Sep 11 23:31:08.740051 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:48044.service - OpenSSH per-connection server daemon (10.0.0.1:48044).
Sep 11 23:31:08.740981 systemd-logind[1506]: Removed session 6.
Sep 11 23:31:08.786384 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 48044 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:31:08.787808 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:31:08.791744 systemd-logind[1506]: New session 7 of user core.
Sep 11 23:31:08.800316 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 11 23:31:08.851005 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 11 23:31:08.851308 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 23:31:09.151638 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 11 23:31:09.165686 (dockerd)[1752]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 11 23:31:09.384836 dockerd[1752]: time="2025-09-11T23:31:09.384771894Z" level=info msg="Starting up"
Sep 11 23:31:09.386285 dockerd[1752]: time="2025-09-11T23:31:09.386247974Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 11 23:31:09.668512 dockerd[1752]: time="2025-09-11T23:31:09.668455454Z" level=info msg="Loading containers: start."
Sep 11 23:31:09.676175 kernel: Initializing XFRM netlink socket
Sep 11 23:31:09.892053 systemd-networkd[1458]: docker0: Link UP
Sep 11 23:31:09.895874 dockerd[1752]: time="2025-09-11T23:31:09.895814294Z" level=info msg="Loading containers: done."
Sep 11 23:31:09.911180 dockerd[1752]: time="2025-09-11T23:31:09.911104814Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 11 23:31:09.911320 dockerd[1752]: time="2025-09-11T23:31:09.911208494Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 11 23:31:09.911345 dockerd[1752]: time="2025-09-11T23:31:09.911322294Z" level=info msg="Initializing buildkit"
Sep 11 23:31:09.938679 dockerd[1752]: time="2025-09-11T23:31:09.938570854Z" level=info msg="Completed buildkit initialization"
Sep 11 23:31:09.945273 dockerd[1752]: time="2025-09-11T23:31:09.945212694Z" level=info msg="Daemon has completed initialization"
Sep 11 23:31:09.945436 dockerd[1752]: time="2025-09-11T23:31:09.945323454Z" level=info msg="API listen on /run/docker.sock"
Sep 11 23:31:09.946242 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 11 23:31:10.410607 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3706115603-merged.mount: Deactivated successfully.
Sep 11 23:31:10.610315 containerd[1535]: time="2025-09-11T23:31:10.609874934Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 11 23:31:11.372904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313362594.mount: Deactivated successfully.
Sep 11 23:31:12.857604 containerd[1535]: time="2025-09-11T23:31:12.857553694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:12.858294 containerd[1535]: time="2025-09-11T23:31:12.858259254Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 11 23:31:12.859549 containerd[1535]: time="2025-09-11T23:31:12.859513494Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:12.862679 containerd[1535]: time="2025-09-11T23:31:12.862645534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:12.865807 containerd[1535]: time="2025-09-11T23:31:12.864245054Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.25432156s"
Sep 11 23:31:12.865807 containerd[1535]: time="2025-09-11T23:31:12.864298974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 11 23:31:12.866446 containerd[1535]: time="2025-09-11T23:31:12.866414454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 11 23:31:14.633423 containerd[1535]: time="2025-09-11T23:31:14.633373374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:14.634376 containerd[1535]: time="2025-09-11T23:31:14.634225974Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 11 23:31:14.635132 containerd[1535]: time="2025-09-11T23:31:14.635096374Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:14.637811 containerd[1535]: time="2025-09-11T23:31:14.637771374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:14.639156 containerd[1535]: time="2025-09-11T23:31:14.639112694Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.77265544s"
Sep 11 23:31:14.639234 containerd[1535]: time="2025-09-11T23:31:14.639169814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 11 23:31:14.640231 containerd[1535]: time="2025-09-11T23:31:14.640009294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 11 23:31:15.935750 containerd[1535]: time="2025-09-11T23:31:15.935232494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:15.936084 containerd[1535]: time="2025-09-11T23:31:15.935764574Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 11 23:31:15.936792 containerd[1535]: time="2025-09-11T23:31:15.936766534Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:15.940405 containerd[1535]: time="2025-09-11T23:31:15.940353414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:15.941326 containerd[1535]: time="2025-09-11T23:31:15.941292814Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.3012452s"
Sep 11 23:31:15.941374 containerd[1535]: time="2025-09-11T23:31:15.941330734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 11 23:31:15.941906 containerd[1535]: time="2025-09-11T23:31:15.941725534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 11 23:31:17.211839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854082259.mount: Deactivated successfully.
Sep 11 23:31:17.544924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 11 23:31:17.547007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:31:17.692187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:31:17.703519 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 23:31:17.756331 kubelet[2048]: E0911 23:31:17.756252 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 23:31:17.763631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 23:31:17.763768 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 23:31:17.764585 systemd[1]: kubelet.service: Consumed 162ms CPU time, 108.1M memory peak.
Sep 11 23:31:17.951265 containerd[1535]: time="2025-09-11T23:31:17.950954094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:17.952250 containerd[1535]: time="2025-09-11T23:31:17.952177334Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 11 23:31:17.953274 containerd[1535]: time="2025-09-11T23:31:17.953224534Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:17.955316 containerd[1535]: time="2025-09-11T23:31:17.955269374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:17.955907 containerd[1535]: time="2025-09-11T23:31:17.955722774Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 2.01396112s"
Sep 11 23:31:17.955907 containerd[1535]: time="2025-09-11T23:31:17.955757374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 11 23:31:17.956362 containerd[1535]: time="2025-09-11T23:31:17.956341814Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 11 23:31:18.467627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282549343.mount: Deactivated successfully.
Sep 11 23:31:19.232993 containerd[1535]: time="2025-09-11T23:31:19.232478054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:19.232993 containerd[1535]: time="2025-09-11T23:31:19.232982254Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 11 23:31:19.234135 containerd[1535]: time="2025-09-11T23:31:19.234078614Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:19.236782 containerd[1535]: time="2025-09-11T23:31:19.236723934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:19.238608 containerd[1535]: time="2025-09-11T23:31:19.237960574Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.28157288s"
Sep 11 23:31:19.238608 containerd[1535]: time="2025-09-11T23:31:19.237996254Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 11 23:31:19.238608 containerd[1535]: time="2025-09-11T23:31:19.238402494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 11 23:31:19.680281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821182607.mount: Deactivated successfully.
Sep 11 23:31:19.686010 containerd[1535]: time="2025-09-11T23:31:19.685638494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 23:31:19.686447 containerd[1535]: time="2025-09-11T23:31:19.686418054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 11 23:31:19.687519 containerd[1535]: time="2025-09-11T23:31:19.687490014Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 23:31:19.689522 containerd[1535]: time="2025-09-11T23:31:19.689489694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 23:31:19.690887 containerd[1535]: time="2025-09-11T23:31:19.690645974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.2154ms"
Sep 11 23:31:19.690887 containerd[1535]: time="2025-09-11T23:31:19.690682814Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 11 23:31:19.691262 containerd[1535]: time="2025-09-11T23:31:19.691124294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 11 23:31:20.255495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915772808.mount: Deactivated successfully.
Sep 11 23:31:21.882078 containerd[1535]: time="2025-09-11T23:31:21.882001654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:21.882580 containerd[1535]: time="2025-09-11T23:31:21.882540174Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 11 23:31:21.883623 containerd[1535]: time="2025-09-11T23:31:21.883586334Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:21.886268 containerd[1535]: time="2025-09-11T23:31:21.886230494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:21.888197 containerd[1535]: time="2025-09-11T23:31:21.888131334Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.19697864s"
Sep 11 23:31:21.888245 containerd[1535]: time="2025-09-11T23:31:21.888202094Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 11 23:31:27.246934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:31:27.247073 systemd[1]: kubelet.service: Consumed 162ms CPU time, 108.1M memory peak.
Sep 11 23:31:27.248919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:31:27.269625 systemd[1]: Reload requested from client PID 2197 ('systemctl') (unit session-7.scope)...
Sep 11 23:31:27.269640 systemd[1]: Reloading...
Sep 11 23:31:27.334774 zram_generator::config[2240]: No configuration found.
Sep 11 23:31:27.451535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 23:31:27.535810 systemd[1]: Reloading finished in 265 ms.
Sep 11 23:31:27.595687 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 11 23:31:27.595761 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 11 23:31:27.595994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:31:27.596037 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95M memory peak.
Sep 11 23:31:27.598366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 23:31:27.707540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 23:31:27.711988 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 23:31:27.749330 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 23:31:27.749330 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 11 23:31:27.749330 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 23:31:27.749665 kubelet[2285]: I0911 23:31:27.749370 2285 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 23:31:28.954725 kubelet[2285]: I0911 23:31:28.954667 2285 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 11 23:31:28.954725 kubelet[2285]: I0911 23:31:28.954701 2285 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 23:31:28.955258 kubelet[2285]: I0911 23:31:28.955244 2285 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 11 23:31:28.972923 kubelet[2285]: E0911 23:31:28.972887 2285 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:31:28.974241 kubelet[2285]: I0911 23:31:28.974222 2285 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 23:31:28.981233 kubelet[2285]: I0911 23:31:28.981211 2285 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 23:31:28.984686 kubelet[2285]: I0911 23:31:28.984659 2285 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 23:31:28.985412 kubelet[2285]: I0911 23:31:28.985382 2285 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 11 23:31:28.985543 kubelet[2285]: I0911 23:31:28.985517 2285 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 23:31:28.985699 kubelet[2285]: I0911 23:31:28.985542 2285 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 23:31:28.985777 kubelet[2285]: I0911 23:31:28.985763 2285 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 23:31:28.985777 kubelet[2285]: I0911 23:31:28.985771 2285 container_manager_linux.go:300] "Creating device plugin manager"
Sep 11 23:31:28.986020 kubelet[2285]: I0911 23:31:28.986006 2285 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 23:31:28.988283 kubelet[2285]: I0911 23:31:28.987799 2285 kubelet.go:408] "Attempting to sync node with API server"
Sep 11 23:31:28.988283 kubelet[2285]: I0911 23:31:28.987822 2285 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 23:31:28.988283 kubelet[2285]: I0911 23:31:28.987842 2285 kubelet.go:314] "Adding apiserver pod source"
Sep 11 23:31:28.988283 kubelet[2285]: I0911 23:31:28.987913 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 23:31:28.991037 kubelet[2285]: W0911 23:31:28.990982 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused
Sep 11 23:31:28.991143 kubelet[2285]: E0911 23:31:28.991124 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:31:28.991229 kubelet[2285]: W0911 23:31:28.990992 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused
Sep 11 23:31:28.991296 kubelet[2285]: E0911 23:31:28.991283 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 11 23:31:28.992070 kubelet[2285]: I0911 23:31:28.992037 2285 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 23:31:28.992776 kubelet[2285]: I0911 23:31:28.992749 2285 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 11 23:31:28.992940 kubelet[2285]: W0911 23:31:28.992922 2285 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 23:31:28.994224 kubelet[2285]: I0911 23:31:28.994199 2285 server.go:1274] "Started kubelet" Sep 11 23:31:28.994561 kubelet[2285]: I0911 23:31:28.994521 2285 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:31:28.994936 kubelet[2285]: I0911 23:31:28.994865 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:31:28.995207 kubelet[2285]: I0911 23:31:28.995184 2285 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:31:28.996793 kubelet[2285]: I0911 23:31:28.996770 2285 server.go:449] "Adding debug handlers to kubelet server" Sep 11 23:31:28.997742 kubelet[2285]: I0911 23:31:28.997715 2285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:31:28.998100 kubelet[2285]: I0911 23:31:28.998087 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 23:31:29.000389 kubelet[2285]: E0911 23:31:29.000362 2285 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:31:29.000389 kubelet[2285]: I0911 23:31:29.000364 2285 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 23:31:29.000483 kubelet[2285]: I0911 23:31:29.000372 2285 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 23:31:29.000508 kubelet[2285]: I0911 23:31:29.000494 2285 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:31:29.000931 kubelet[2285]: W0911 23:31:29.000874 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 11 23:31:29.000931 kubelet[2285]: E0911 23:31:29.000923 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:31:29.001006 kubelet[2285]: E0911 23:31:29.000986 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Sep 11 23:31:29.001063 kubelet[2285]: E0911 23:31:29.001042 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:31:29.001233 kubelet[2285]: I0911 23:31:29.001210 2285 factory.go:221] Registration of the systemd container factory successfully Sep 11 23:31:29.001328 kubelet[2285]: I0911 23:31:29.001308 2285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:31:29.002159 kubelet[2285]: E0911 23:31:29.000807 2285 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18645e588cd40826 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 23:31:28.994175014 +0000 UTC m=+1.278638921,LastTimestamp:2025-09-11 23:31:28.994175014 +0000 UTC m=+1.278638921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 23:31:29.002420 kubelet[2285]: I0911 23:31:29.002393 2285 factory.go:221] Registration of the containerd container factory successfully Sep 11 23:31:29.012133 kubelet[2285]: I0911 23:31:29.012063 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 23:31:29.013170 kubelet[2285]: I0911 23:31:29.013127 2285 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 23:31:29.013170 kubelet[2285]: I0911 23:31:29.013161 2285 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 23:31:29.013240 kubelet[2285]: I0911 23:31:29.013183 2285 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 23:31:29.013240 kubelet[2285]: E0911 23:31:29.013224 2285 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:31:29.017863 kubelet[2285]: I0911 23:31:29.017846 2285 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 23:31:29.017863 kubelet[2285]: I0911 23:31:29.017861 2285 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 23:31:29.017863 kubelet[2285]: W0911 23:31:29.017829 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 11 23:31:29.017863 kubelet[2285]: I0911 23:31:29.017877 2285 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:31:29.018071 kubelet[2285]: E0911 23:31:29.017895 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:31:29.101360 kubelet[2285]: E0911 23:31:29.101319 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:31:29.101952 kubelet[2285]: I0911 23:31:29.101922 2285 policy_none.go:49] "None policy: Start" Sep 11 23:31:29.102599 kubelet[2285]: I0911 23:31:29.102583 2285 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 23:31:29.102671 kubelet[2285]: 
I0911 23:31:29.102627 2285 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:31:29.110977 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 23:31:29.113333 kubelet[2285]: E0911 23:31:29.113314 2285 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 23:31:29.123975 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 23:31:29.126526 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 23:31:29.147000 kubelet[2285]: I0911 23:31:29.146928 2285 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 23:31:29.147165 kubelet[2285]: I0911 23:31:29.147131 2285 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:31:29.147208 kubelet[2285]: I0911 23:31:29.147155 2285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:31:29.147527 kubelet[2285]: I0911 23:31:29.147440 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:31:29.148943 kubelet[2285]: E0911 23:31:29.148863 2285 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 23:31:29.202475 kubelet[2285]: E0911 23:31:29.202431 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Sep 11 23:31:29.248937 kubelet[2285]: I0911 23:31:29.248887 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:31:29.249460 kubelet[2285]: E0911 23:31:29.249400 2285 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 11 23:31:29.320819 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 11 23:31:29.348010 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 11 23:31:29.362505 systemd[1]: Created slice kubepods-burstable-pod61a01d99ffd18a200ca8606b497aeb54.slice - libcontainer container kubepods-burstable-pod61a01d99ffd18a200ca8606b497aeb54.slice. Sep 11 23:31:29.403294 kubelet[2285]: I0911 23:31:29.403246 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:29.403294 kubelet[2285]: I0911 23:31:29.403284 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:29.403294 kubelet[2285]: I0911 23:31:29.403303 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:29.403424 kubelet[2285]: I0911 
23:31:29.403320 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61a01d99ffd18a200ca8606b497aeb54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61a01d99ffd18a200ca8606b497aeb54\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:29.403424 kubelet[2285]: I0911 23:31:29.403337 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61a01d99ffd18a200ca8606b497aeb54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61a01d99ffd18a200ca8606b497aeb54\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:29.403424 kubelet[2285]: I0911 23:31:29.403352 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61a01d99ffd18a200ca8606b497aeb54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61a01d99ffd18a200ca8606b497aeb54\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:29.403424 kubelet[2285]: I0911 23:31:29.403368 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:29.403424 kubelet[2285]: I0911 23:31:29.403384 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:29.403536 kubelet[2285]: 
I0911 23:31:29.403398 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:31:29.451398 kubelet[2285]: I0911 23:31:29.451358 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:31:29.451787 kubelet[2285]: E0911 23:31:29.451748 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 11 23:31:29.603653 kubelet[2285]: E0911 23:31:29.603542 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Sep 11 23:31:29.645944 kubelet[2285]: E0911 23:31:29.645881 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:29.646491 containerd[1535]: time="2025-09-11T23:31:29.646450734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 11 23:31:29.661031 kubelet[2285]: E0911 23:31:29.661004 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:29.661422 containerd[1535]: time="2025-09-11T23:31:29.661385134Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 11 23:31:29.663675 containerd[1535]: time="2025-09-11T23:31:29.663643694Z" level=info msg="connecting to shim 3ee02ddda20e33184b31a987ad800b822344b2707414c1ce1ae9c47855e1e568" address="unix:///run/containerd/s/38ea887e953ff89702859c5f9237742f04941f07345b5296b927a99c1287fcec" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:31:29.664983 kubelet[2285]: E0911 23:31:29.664947 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:29.666137 containerd[1535]: time="2025-09-11T23:31:29.665887574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61a01d99ffd18a200ca8606b497aeb54,Namespace:kube-system,Attempt:0,}" Sep 11 23:31:29.685721 containerd[1535]: time="2025-09-11T23:31:29.685680734Z" level=info msg="connecting to shim acfc4c9921ce60bb6a6ba57c061d70a3f689fb2646ef6e42d2a47ef39dcd5b71" address="unix:///run/containerd/s/5748cdd1d622a9b214f6cecf9f20d76e6d881629602e7ec1b1a58f697bac5d25" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:31:29.693116 containerd[1535]: time="2025-09-11T23:31:29.693079734Z" level=info msg="connecting to shim 063f69a76d76116c2d070748e304998e476614d146c97ae3076dafd024cde655" address="unix:///run/containerd/s/ff184b4c395e2bdd601fdb38a899fbf0ad1cd787bde567e3935eb368c2c804c9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:31:29.701396 systemd[1]: Started cri-containerd-3ee02ddda20e33184b31a987ad800b822344b2707414c1ce1ae9c47855e1e568.scope - libcontainer container 3ee02ddda20e33184b31a987ad800b822344b2707414c1ce1ae9c47855e1e568. Sep 11 23:31:29.709119 systemd[1]: Started cri-containerd-acfc4c9921ce60bb6a6ba57c061d70a3f689fb2646ef6e42d2a47ef39dcd5b71.scope - libcontainer container acfc4c9921ce60bb6a6ba57c061d70a3f689fb2646ef6e42d2a47ef39dcd5b71. 
Sep 11 23:31:29.716929 systemd[1]: Started cri-containerd-063f69a76d76116c2d070748e304998e476614d146c97ae3076dafd024cde655.scope - libcontainer container 063f69a76d76116c2d070748e304998e476614d146c97ae3076dafd024cde655. Sep 11 23:31:29.739519 containerd[1535]: time="2025-09-11T23:31:29.739483894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ee02ddda20e33184b31a987ad800b822344b2707414c1ce1ae9c47855e1e568\"" Sep 11 23:31:29.741191 kubelet[2285]: E0911 23:31:29.741169 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:29.743870 containerd[1535]: time="2025-09-11T23:31:29.743492934Z" level=info msg="CreateContainer within sandbox \"3ee02ddda20e33184b31a987ad800b822344b2707414c1ce1ae9c47855e1e568\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 23:31:29.748366 containerd[1535]: time="2025-09-11T23:31:29.748324774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"acfc4c9921ce60bb6a6ba57c061d70a3f689fb2646ef6e42d2a47ef39dcd5b71\"" Sep 11 23:31:29.748934 kubelet[2285]: E0911 23:31:29.748907 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:29.750974 containerd[1535]: time="2025-09-11T23:31:29.750891734Z" level=info msg="CreateContainer within sandbox \"acfc4c9921ce60bb6a6ba57c061d70a3f689fb2646ef6e42d2a47ef39dcd5b71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 23:31:29.753349 containerd[1535]: time="2025-09-11T23:31:29.753318494Z" level=info msg="Container 
67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:31:29.759619 containerd[1535]: time="2025-09-11T23:31:29.759583214Z" level=info msg="Container 54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:31:29.763077 containerd[1535]: time="2025-09-11T23:31:29.763050134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61a01d99ffd18a200ca8606b497aeb54,Namespace:kube-system,Attempt:0,} returns sandbox id \"063f69a76d76116c2d070748e304998e476614d146c97ae3076dafd024cde655\"" Sep 11 23:31:29.763733 kubelet[2285]: E0911 23:31:29.763710 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:29.764605 containerd[1535]: time="2025-09-11T23:31:29.764448454Z" level=info msg="CreateContainer within sandbox \"3ee02ddda20e33184b31a987ad800b822344b2707414c1ce1ae9c47855e1e568\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa\"" Sep 11 23:31:29.764954 containerd[1535]: time="2025-09-11T23:31:29.764933334Z" level=info msg="StartContainer for \"67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa\"" Sep 11 23:31:29.765618 containerd[1535]: time="2025-09-11T23:31:29.765589894Z" level=info msg="CreateContainer within sandbox \"063f69a76d76116c2d070748e304998e476614d146c97ae3076dafd024cde655\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 23:31:29.766191 containerd[1535]: time="2025-09-11T23:31:29.766165574Z" level=info msg="connecting to shim 67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa" address="unix:///run/containerd/s/38ea887e953ff89702859c5f9237742f04941f07345b5296b927a99c1287fcec" protocol=ttrpc version=3 Sep 11 
23:31:29.768959 containerd[1535]: time="2025-09-11T23:31:29.768926454Z" level=info msg="CreateContainer within sandbox \"acfc4c9921ce60bb6a6ba57c061d70a3f689fb2646ef6e42d2a47ef39dcd5b71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0\"" Sep 11 23:31:29.769276 containerd[1535]: time="2025-09-11T23:31:29.769241294Z" level=info msg="StartContainer for \"54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0\"" Sep 11 23:31:29.770379 containerd[1535]: time="2025-09-11T23:31:29.770354174Z" level=info msg="connecting to shim 54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0" address="unix:///run/containerd/s/5748cdd1d622a9b214f6cecf9f20d76e6d881629602e7ec1b1a58f697bac5d25" protocol=ttrpc version=3 Sep 11 23:31:29.771990 containerd[1535]: time="2025-09-11T23:31:29.771969774Z" level=info msg="Container 7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:31:29.779549 containerd[1535]: time="2025-09-11T23:31:29.779521014Z" level=info msg="CreateContainer within sandbox \"063f69a76d76116c2d070748e304998e476614d146c97ae3076dafd024cde655\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a\"" Sep 11 23:31:29.779965 containerd[1535]: time="2025-09-11T23:31:29.779947374Z" level=info msg="StartContainer for \"7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a\"" Sep 11 23:31:29.781098 containerd[1535]: time="2025-09-11T23:31:29.781069774Z" level=info msg="connecting to shim 7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a" address="unix:///run/containerd/s/ff184b4c395e2bdd601fdb38a899fbf0ad1cd787bde567e3935eb368c2c804c9" protocol=ttrpc version=3 Sep 11 23:31:29.788302 systemd[1]: Started cri-containerd-67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa.scope - 
libcontainer container 67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa. Sep 11 23:31:29.791262 systemd[1]: Started cri-containerd-54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0.scope - libcontainer container 54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0. Sep 11 23:31:29.798085 systemd[1]: Started cri-containerd-7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a.scope - libcontainer container 7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a. Sep 11 23:31:29.836399 containerd[1535]: time="2025-09-11T23:31:29.836354774Z" level=info msg="StartContainer for \"67bbbe79545448ed8e9a4cd3c19de5ebc7fdf40aaeb22541976507fc91710baa\" returns successfully" Sep 11 23:31:29.839281 containerd[1535]: time="2025-09-11T23:31:29.839247214Z" level=info msg="StartContainer for \"7cbc1fcf656945aa00ee025785c024b184f9d9edb59eb36b3a86d0b9f22cc03a\" returns successfully" Sep 11 23:31:29.842424 containerd[1535]: time="2025-09-11T23:31:29.842389454Z" level=info msg="StartContainer for \"54433a50631f04a7b001e3ff18d04d1396f3366c0b311963e0fb26e9fd4c26c0\" returns successfully" Sep 11 23:31:29.853344 kubelet[2285]: I0911 23:31:29.853318 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:31:29.854812 kubelet[2285]: E0911 23:31:29.853773 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 11 23:31:29.914650 kubelet[2285]: W0911 23:31:29.914535 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 11 23:31:29.914650 kubelet[2285]: E0911 23:31:29.914602 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 11 23:31:30.024126 kubelet[2285]: E0911 23:31:30.024070 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:30.029103 kubelet[2285]: E0911 23:31:30.029006 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:30.029391 kubelet[2285]: E0911 23:31:30.029372 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:30.656108 kubelet[2285]: I0911 23:31:30.656081 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:31:31.032912 kubelet[2285]: E0911 23:31:31.032611 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:31.154604 kubelet[2285]: E0911 23:31:31.154555 2285 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 23:31:31.225594 kubelet[2285]: I0911 23:31:31.225553 2285 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 23:31:31.310178 kubelet[2285]: E0911 23:31:31.309657 2285 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:31.310439 kubelet[2285]: E0911 
23:31:31.310405 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:31.989866 kubelet[2285]: I0911 23:31:31.989826 2285 apiserver.go:52] "Watching apiserver" Sep 11 23:31:32.000602 kubelet[2285]: I0911 23:31:32.000557 2285 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 23:31:32.035017 kubelet[2285]: E0911 23:31:32.034988 2285 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:32.035332 kubelet[2285]: E0911 23:31:32.035139 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:33.147408 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-7.scope)... Sep 11 23:31:33.147423 systemd[1]: Reloading... Sep 11 23:31:33.216183 zram_generator::config[2601]: No configuration found. Sep 11 23:31:33.363299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 23:31:33.459680 systemd[1]: Reloading finished in 311 ms. Sep 11 23:31:33.495854 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 23:31:33.524961 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 23:31:33.525211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:31:33.525266 systemd[1]: kubelet.service: Consumed 1.635s CPU time, 128.9M memory peak. Sep 11 23:31:33.529478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 11 23:31:33.673289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 23:31:33.683484 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 23:31:33.729663 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:31:33.729663 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 11 23:31:33.729663 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 23:31:33.729663 kubelet[2643]: I0911 23:31:33.729326 2643 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 23:31:33.734425 kubelet[2643]: I0911 23:31:33.734391 2643 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 11 23:31:33.734425 kubelet[2643]: I0911 23:31:33.734418 2643 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 23:31:33.734666 kubelet[2643]: I0911 23:31:33.734640 2643 server.go:934] "Client rotation is on, will bootstrap in background" Sep 11 23:31:33.735914 kubelet[2643]: I0911 23:31:33.735889 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 11 23:31:33.737993 kubelet[2643]: I0911 23:31:33.737970 2643 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 23:31:33.746103 kubelet[2643]: I0911 23:31:33.746070 2643 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 23:31:33.751340 kubelet[2643]: I0911 23:31:33.751319 2643 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 23:31:33.751451 kubelet[2643]: I0911 23:31:33.751436 2643 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 11 23:31:33.751562 kubelet[2643]: I0911 23:31:33.751536 2643 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 23:31:33.751715 kubelet[2643]: I0911 23:31:33.751561 2643 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"image
fs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 23:31:33.751782 kubelet[2643]: I0911 23:31:33.751722 2643 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 23:31:33.751782 kubelet[2643]: I0911 23:31:33.751736 2643 container_manager_linux.go:300] "Creating device plugin manager" Sep 11 23:31:33.751782 kubelet[2643]: I0911 23:31:33.751771 2643 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:31:33.751877 kubelet[2643]: I0911 23:31:33.751863 2643 kubelet.go:408] "Attempting to sync node with API server" Sep 11 23:31:33.751900 kubelet[2643]: I0911 23:31:33.751881 2643 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 23:31:33.751900 kubelet[2643]: I0911 23:31:33.751898 2643 kubelet.go:314] "Adding apiserver pod source" Sep 11 23:31:33.751938 kubelet[2643]: I0911 23:31:33.751908 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 23:31:33.752426 kubelet[2643]: I0911 23:31:33.752403 2643 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 23:31:33.752928 kubelet[2643]: I0911 23:31:33.752909 2643 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 23:31:33.757254 kubelet[2643]: I0911 23:31:33.757231 2643 server.go:1274] "Started kubelet" Sep 11 23:31:33.759227 kubelet[2643]: I0911 23:31:33.759210 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 
23:31:33.759536 kubelet[2643]: I0911 23:31:33.759520 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 23:31:33.759771 kubelet[2643]: I0911 23:31:33.759744 2643 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 23:31:33.759931 kubelet[2643]: I0911 23:31:33.759875 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 23:31:33.760166 kubelet[2643]: I0911 23:31:33.760124 2643 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 23:31:33.761011 kubelet[2643]: I0911 23:31:33.760990 2643 server.go:449] "Adding debug handlers to kubelet server" Sep 11 23:31:33.761511 kubelet[2643]: I0911 23:31:33.761489 2643 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 11 23:31:33.762673 kubelet[2643]: I0911 23:31:33.762638 2643 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 11 23:31:33.762768 kubelet[2643]: I0911 23:31:33.762752 2643 reconciler.go:26] "Reconciler: start to sync state" Sep 11 23:31:33.768331 kubelet[2643]: E0911 23:31:33.768306 2643 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 23:31:33.768493 kubelet[2643]: E0911 23:31:33.768346 2643 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 23:31:33.770053 kubelet[2643]: I0911 23:31:33.770028 2643 factory.go:221] Registration of the systemd container factory successfully Sep 11 23:31:33.770267 kubelet[2643]: I0911 23:31:33.770243 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 23:31:33.777963 kubelet[2643]: I0911 23:31:33.777908 2643 factory.go:221] Registration of the containerd container factory successfully Sep 11 23:31:33.784625 kubelet[2643]: I0911 23:31:33.784487 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 23:31:33.786509 kubelet[2643]: I0911 23:31:33.786485 2643 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 23:31:33.786604 kubelet[2643]: I0911 23:31:33.786594 2643 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 11 23:31:33.786713 kubelet[2643]: I0911 23:31:33.786701 2643 kubelet.go:2321] "Starting kubelet main sync loop" Sep 11 23:31:33.786807 kubelet[2643]: E0911 23:31:33.786791 2643 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 23:31:33.816768 kubelet[2643]: I0911 23:31:33.816745 2643 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 11 23:31:33.817132 kubelet[2643]: I0911 23:31:33.816898 2643 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 11 23:31:33.817132 kubelet[2643]: I0911 23:31:33.816922 2643 state_mem.go:36] "Initialized new in-memory state store" Sep 11 23:31:33.817132 kubelet[2643]: I0911 23:31:33.817068 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 23:31:33.817132 kubelet[2643]: I0911 23:31:33.817077 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 23:31:33.817132 kubelet[2643]: I0911 23:31:33.817095 2643 policy_none.go:49] "None policy: Start" Sep 11 23:31:33.817837 kubelet[2643]: I0911 23:31:33.817821 2643 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 11 23:31:33.818014 kubelet[2643]: I0911 23:31:33.818003 2643 state_mem.go:35] "Initializing new in-memory state store" Sep 11 23:31:33.818288 kubelet[2643]: I0911 23:31:33.818271 2643 state_mem.go:75] "Updated machine memory state" Sep 11 23:31:33.822457 kubelet[2643]: I0911 23:31:33.822303 2643 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 23:31:33.822532 kubelet[2643]: I0911 23:31:33.822473 2643 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 23:31:33.822532 kubelet[2643]: I0911 23:31:33.822484 2643 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 23:31:33.823117 kubelet[2643]: I0911 23:31:33.823093 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 23:31:33.926691 kubelet[2643]: I0911 23:31:33.926501 2643 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 11 23:31:33.933072 kubelet[2643]: I0911 23:31:33.933033 2643 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 11 23:31:33.933185 kubelet[2643]: I0911 23:31:33.933130 2643 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 11 23:31:33.964327 kubelet[2643]: I0911 23:31:33.964211 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:33.964327 kubelet[2643]: I0911 23:31:33.964252 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61a01d99ffd18a200ca8606b497aeb54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61a01d99ffd18a200ca8606b497aeb54\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:33.964327 kubelet[2643]: I0911 23:31:33.964282 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 11 23:31:33.964327 kubelet[2643]: I0911 23:31:33.964302 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61a01d99ffd18a200ca8606b497aeb54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61a01d99ffd18a200ca8606b497aeb54\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:33.964524 kubelet[2643]: I0911 23:31:33.964341 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61a01d99ffd18a200ca8606b497aeb54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61a01d99ffd18a200ca8606b497aeb54\") " pod="kube-system/kube-apiserver-localhost" Sep 11 23:31:33.964524 kubelet[2643]: I0911 23:31:33.964378 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:33.964524 kubelet[2643]: I0911 23:31:33.964408 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:33.964524 kubelet[2643]: I0911 23:31:33.964432 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:33.964524 kubelet[2643]: I0911 23:31:33.964446 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 23:31:34.194117 kubelet[2643]: E0911 23:31:34.193923 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:34.194117 kubelet[2643]: E0911 23:31:34.193965 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:34.194117 kubelet[2643]: E0911 23:31:34.193998 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:34.754872 kubelet[2643]: I0911 23:31:34.754811 2643 apiserver.go:52] "Watching apiserver" Sep 11 23:31:34.763261 kubelet[2643]: I0911 23:31:34.763240 2643 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 11 23:31:34.783710 kubelet[2643]: I0911 23:31:34.783632 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.783619094 podStartE2EDuration="1.783619094s" podCreationTimestamp="2025-09-11 23:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:31:34.777183734 +0000 UTC m=+1.086434681" watchObservedRunningTime="2025-09-11 23:31:34.783619094 +0000 UTC m=+1.092870041" Sep 11 23:31:34.790224 kubelet[2643]: I0911 23:31:34.789804 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.789793014 podStartE2EDuration="1.789793014s" 
podCreationTimestamp="2025-09-11 23:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:31:34.784007774 +0000 UTC m=+1.093258721" watchObservedRunningTime="2025-09-11 23:31:34.789793014 +0000 UTC m=+1.099044001" Sep 11 23:31:34.790224 kubelet[2643]: I0911 23:31:34.789871 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7898670939999999 podStartE2EDuration="1.789867094s" podCreationTimestamp="2025-09-11 23:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:31:34.789627654 +0000 UTC m=+1.098878601" watchObservedRunningTime="2025-09-11 23:31:34.789867094 +0000 UTC m=+1.099118041" Sep 11 23:31:34.803545 kubelet[2643]: E0911 23:31:34.803511 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:34.803607 kubelet[2643]: E0911 23:31:34.803589 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:34.804044 kubelet[2643]: E0911 23:31:34.803640 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:35.805085 kubelet[2643]: E0911 23:31:35.805048 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:36.041698 kubelet[2643]: E0911 23:31:36.041658 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:37.690648 kubelet[2643]: E0911 23:31:37.690602 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:39.967198 systemd[1]: Created slice kubepods-besteffort-pod3d068234_cf01_41fa_8e72_7402573eb9df.slice - libcontainer container kubepods-besteffort-pod3d068234_cf01_41fa_8e72_7402573eb9df.slice. Sep 11 23:31:39.999044 kubelet[2643]: I0911 23:31:39.999018 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d068234-cf01-41fa-8e72-7402573eb9df-kube-proxy\") pod \"kube-proxy-hhwm4\" (UID: \"3d068234-cf01-41fa-8e72-7402573eb9df\") " pod="kube-system/kube-proxy-hhwm4" Sep 11 23:31:39.999044 kubelet[2643]: I0911 23:31:39.999023 2643 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 23:31:39.999044 kubelet[2643]: I0911 23:31:39.999052 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d068234-cf01-41fa-8e72-7402573eb9df-lib-modules\") pod \"kube-proxy-hhwm4\" (UID: \"3d068234-cf01-41fa-8e72-7402573eb9df\") " pod="kube-system/kube-proxy-hhwm4" Sep 11 23:31:39.999537 kubelet[2643]: I0911 23:31:39.999114 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d068234-cf01-41fa-8e72-7402573eb9df-xtables-lock\") pod \"kube-proxy-hhwm4\" (UID: \"3d068234-cf01-41fa-8e72-7402573eb9df\") " pod="kube-system/kube-proxy-hhwm4" Sep 11 23:31:39.999537 kubelet[2643]: I0911 23:31:39.999134 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfkjx\" 
(UniqueName: \"kubernetes.io/projected/3d068234-cf01-41fa-8e72-7402573eb9df-kube-api-access-xfkjx\") pod \"kube-proxy-hhwm4\" (UID: \"3d068234-cf01-41fa-8e72-7402573eb9df\") " pod="kube-system/kube-proxy-hhwm4" Sep 11 23:31:39.999582 containerd[1535]: time="2025-09-11T23:31:39.999520942Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 23:31:40.000448 kubelet[2643]: I0911 23:31:39.999767 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 23:31:40.107406 kubelet[2643]: E0911 23:31:40.107375 2643 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 11 23:31:40.107406 kubelet[2643]: E0911 23:31:40.107403 2643 projected.go:194] Error preparing data for projected volume kube-api-access-xfkjx for pod kube-system/kube-proxy-hhwm4: configmap "kube-root-ca.crt" not found Sep 11 23:31:40.107581 kubelet[2643]: E0911 23:31:40.107458 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d068234-cf01-41fa-8e72-7402573eb9df-kube-api-access-xfkjx podName:3d068234-cf01-41fa-8e72-7402573eb9df nodeName:}" failed. No retries permitted until 2025-09-11 23:31:40.607438641 +0000 UTC m=+6.916689588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xfkjx" (UniqueName: "kubernetes.io/projected/3d068234-cf01-41fa-8e72-7402573eb9df-kube-api-access-xfkjx") pod "kube-proxy-hhwm4" (UID: "3d068234-cf01-41fa-8e72-7402573eb9df") : configmap "kube-root-ca.crt" not found Sep 11 23:31:40.703281 kubelet[2643]: E0911 23:31:40.703222 2643 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 11 23:31:40.703281 kubelet[2643]: E0911 23:31:40.703260 2643 projected.go:194] Error preparing data for projected volume kube-api-access-xfkjx for pod kube-system/kube-proxy-hhwm4: configmap "kube-root-ca.crt" not found Sep 11 23:31:40.703452 kubelet[2643]: E0911 23:31:40.703312 2643 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d068234-cf01-41fa-8e72-7402573eb9df-kube-api-access-xfkjx podName:3d068234-cf01-41fa-8e72-7402573eb9df nodeName:}" failed. No retries permitted until 2025-09-11 23:31:41.703297435 +0000 UTC m=+8.012548382 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xfkjx" (UniqueName: "kubernetes.io/projected/3d068234-cf01-41fa-8e72-7402573eb9df-kube-api-access-xfkjx") pod "kube-proxy-hhwm4" (UID: "3d068234-cf01-41fa-8e72-7402573eb9df") : configmap "kube-root-ca.crt" not found Sep 11 23:31:41.075205 systemd[1]: Created slice kubepods-besteffort-pod82d891b0_e0fa_4e63_b75f_5a67f7174e95.slice - libcontainer container kubepods-besteffort-pod82d891b0_e0fa_4e63_b75f_5a67f7174e95.slice. 
Sep 11 23:31:41.206352 kubelet[2643]: I0911 23:31:41.206295 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/82d891b0-e0fa-4e63-b75f-5a67f7174e95-var-lib-calico\") pod \"tigera-operator-58fc44c59b-5lrnc\" (UID: \"82d891b0-e0fa-4e63-b75f-5a67f7174e95\") " pod="tigera-operator/tigera-operator-58fc44c59b-5lrnc" Sep 11 23:31:41.206352 kubelet[2643]: I0911 23:31:41.206359 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlpg\" (UniqueName: \"kubernetes.io/projected/82d891b0-e0fa-4e63-b75f-5a67f7174e95-kube-api-access-rmlpg\") pod \"tigera-operator-58fc44c59b-5lrnc\" (UID: \"82d891b0-e0fa-4e63-b75f-5a67f7174e95\") " pod="tigera-operator/tigera-operator-58fc44c59b-5lrnc" Sep 11 23:31:41.378462 containerd[1535]: time="2025-09-11T23:31:41.378351927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5lrnc,Uid:82d891b0-e0fa-4e63-b75f-5a67f7174e95,Namespace:tigera-operator,Attempt:0,}" Sep 11 23:31:41.401897 containerd[1535]: time="2025-09-11T23:31:41.401379403Z" level=info msg="connecting to shim 63482a064b70714b1ca23bcadf78466e6b2b214d869b19f7d761cd8519330ce2" address="unix:///run/containerd/s/de39b26b6361d4e3914935f2d69b9f036cb817a98206bb2bd1740f89c1f06ced" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:31:41.430364 systemd[1]: Started cri-containerd-63482a064b70714b1ca23bcadf78466e6b2b214d869b19f7d761cd8519330ce2.scope - libcontainer container 63482a064b70714b1ca23bcadf78466e6b2b214d869b19f7d761cd8519330ce2. 
Sep 11 23:31:41.464986 containerd[1535]: time="2025-09-11T23:31:41.464930470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5lrnc,Uid:82d891b0-e0fa-4e63-b75f-5a67f7174e95,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"63482a064b70714b1ca23bcadf78466e6b2b214d869b19f7d761cd8519330ce2\"" Sep 11 23:31:41.466566 containerd[1535]: time="2025-09-11T23:31:41.466537379Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 23:31:41.778569 kubelet[2643]: E0911 23:31:41.778518 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:41.779490 containerd[1535]: time="2025-09-11T23:31:41.779458310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhwm4,Uid:3d068234-cf01-41fa-8e72-7402573eb9df,Namespace:kube-system,Attempt:0,}" Sep 11 23:31:41.806720 containerd[1535]: time="2025-09-11T23:31:41.806635957Z" level=info msg="connecting to shim c4a487967679f69009353517e46324f2acc7cefe082acabd6fe771fd79f680c0" address="unix:///run/containerd/s/0ddf15b7fb0fc62938026f22f48028fb581fb4022a6b20629f7f1f899d55e7a2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:31:41.834331 systemd[1]: Started cri-containerd-c4a487967679f69009353517e46324f2acc7cefe082acabd6fe771fd79f680c0.scope - libcontainer container c4a487967679f69009353517e46324f2acc7cefe082acabd6fe771fd79f680c0. 
Sep 11 23:31:41.861278 containerd[1535]: time="2025-09-11T23:31:41.861133328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhwm4,Uid:3d068234-cf01-41fa-8e72-7402573eb9df,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4a487967679f69009353517e46324f2acc7cefe082acabd6fe771fd79f680c0\"" Sep 11 23:31:41.862007 kubelet[2643]: E0911 23:31:41.861983 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:41.864548 containerd[1535]: time="2025-09-11T23:31:41.864144147Z" level=info msg="CreateContainer within sandbox \"c4a487967679f69009353517e46324f2acc7cefe082acabd6fe771fd79f680c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 23:31:41.874031 containerd[1535]: time="2025-09-11T23:31:41.873995157Z" level=info msg="Container edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:31:41.883429 containerd[1535]: time="2025-09-11T23:31:41.883383050Z" level=info msg="CreateContainer within sandbox \"c4a487967679f69009353517e46324f2acc7cefe082acabd6fe771fd79f680c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b\"" Sep 11 23:31:41.884269 containerd[1535]: time="2025-09-11T23:31:41.884241244Z" level=info msg="StartContainer for \"edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b\"" Sep 11 23:31:41.885850 containerd[1535]: time="2025-09-11T23:31:41.885823313Z" level=info msg="connecting to shim edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b" address="unix:///run/containerd/s/0ddf15b7fb0fc62938026f22f48028fb581fb4022a6b20629f7f1f899d55e7a2" protocol=ttrpc version=3 Sep 11 23:31:41.917358 systemd[1]: Started cri-containerd-edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b.scope - libcontainer 
container edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b. Sep 11 23:31:41.966265 containerd[1535]: time="2025-09-11T23:31:41.966206860Z" level=info msg="StartContainer for \"edb62d3470524746e09ac6c2cedaad0e3f5c7864d0bd1c32708efa5d3d5f169b\" returns successfully" Sep 11 23:31:42.820805 kubelet[2643]: E0911 23:31:42.820774 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:42.829766 kubelet[2643]: I0911 23:31:42.829701 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hhwm4" podStartSLOduration=3.82968748 podStartE2EDuration="3.82968748s" podCreationTimestamp="2025-09-11 23:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:31:42.828840805 +0000 UTC m=+9.138091752" watchObservedRunningTime="2025-09-11 23:31:42.82968748 +0000 UTC m=+9.138938427" Sep 11 23:31:42.927878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396124556.mount: Deactivated successfully. 
Sep 11 23:31:43.249217 containerd[1535]: time="2025-09-11T23:31:43.249170263Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:31:43.249958 containerd[1535]: time="2025-09-11T23:31:43.249925218Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 11 23:31:43.251178 containerd[1535]: time="2025-09-11T23:31:43.251121811Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:31:43.254886 containerd[1535]: time="2025-09-11T23:31:43.254491949Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.787885932s" Sep 11 23:31:43.254886 containerd[1535]: time="2025-09-11T23:31:43.254547429Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 11 23:31:43.257134 containerd[1535]: time="2025-09-11T23:31:43.257084893Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:31:43.260907 containerd[1535]: time="2025-09-11T23:31:43.260864830Z" level=info msg="CreateContainer within sandbox \"63482a064b70714b1ca23bcadf78466e6b2b214d869b19f7d761cd8519330ce2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 23:31:43.267026 containerd[1535]: time="2025-09-11T23:31:43.266518514Z" level=info msg="Container 
f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:31:43.271712 containerd[1535]: time="2025-09-11T23:31:43.271661322Z" level=info msg="CreateContainer within sandbox \"63482a064b70714b1ca23bcadf78466e6b2b214d869b19f7d761cd8519330ce2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4\"" Sep 11 23:31:43.272098 containerd[1535]: time="2025-09-11T23:31:43.272070039Z" level=info msg="StartContainer for \"f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4\"" Sep 11 23:31:43.273056 containerd[1535]: time="2025-09-11T23:31:43.273020953Z" level=info msg="connecting to shim f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4" address="unix:///run/containerd/s/de39b26b6361d4e3914935f2d69b9f036cb817a98206bb2bd1740f89c1f06ced" protocol=ttrpc version=3 Sep 11 23:31:43.298339 systemd[1]: Started cri-containerd-f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4.scope - libcontainer container f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4. 
Sep 11 23:31:43.323422 containerd[1535]: time="2025-09-11T23:31:43.323383598Z" level=info msg="StartContainer for \"f4fcf8b45616280e1f5cc58eaf679735d746f99dfcc216bace4cf9035441d4c4\" returns successfully" Sep 11 23:31:43.665760 kubelet[2643]: E0911 23:31:43.665662 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:43.828250 kubelet[2643]: E0911 23:31:43.828212 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:43.856050 kubelet[2643]: I0911 23:31:43.855789 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-5lrnc" podStartSLOduration=1.062481848 podStartE2EDuration="2.855768906s" podCreationTimestamp="2025-09-11 23:31:41 +0000 UTC" firstStartedPulling="2025-09-11 23:31:41.466122941 +0000 UTC m=+7.775373888" lastFinishedPulling="2025-09-11 23:31:43.259409999 +0000 UTC m=+9.568660946" observedRunningTime="2025-09-11 23:31:43.855549907 +0000 UTC m=+10.164800894" watchObservedRunningTime="2025-09-11 23:31:43.855768906 +0000 UTC m=+10.165019853" Sep 11 23:31:46.051633 kubelet[2643]: E0911 23:31:46.051429 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:47.699252 kubelet[2643]: E0911 23:31:47.699218 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:49.000333 sudo[1732]: pam_unix(sudo:session): session closed for user root Sep 11 23:31:49.002602 sshd[1731]: Connection closed by 10.0.0.1 port 48044 Sep 11 23:31:49.003455 sshd-session[1729]: 
pam_unix(sshd:session): session closed for user core Sep 11 23:31:49.006932 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:48044.service: Deactivated successfully. Sep 11 23:31:49.009834 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 23:31:49.010390 systemd[1]: session-7.scope: Consumed 7.127s CPU time, 223.5M memory peak. Sep 11 23:31:49.012335 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. Sep 11 23:31:49.015312 systemd-logind[1506]: Removed session 7. Sep 11 23:31:50.049258 update_engine[1509]: I20250911 23:31:50.049176 1509 update_attempter.cc:509] Updating boot flags... Sep 11 23:31:52.435261 systemd[1]: Created slice kubepods-besteffort-podf5c0e109_f8a0_4fc2_b0bd_51a85de2b725.slice - libcontainer container kubepods-besteffort-podf5c0e109_f8a0_4fc2_b0bd_51a85de2b725.slice. Sep 11 23:31:52.579666 kubelet[2643]: I0911 23:31:52.579518 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2clz\" (UniqueName: \"kubernetes.io/projected/f5c0e109-f8a0-4fc2-b0bd-51a85de2b725-kube-api-access-v2clz\") pod \"calico-typha-6c68fb76d5-c8fkh\" (UID: \"f5c0e109-f8a0-4fc2-b0bd-51a85de2b725\") " pod="calico-system/calico-typha-6c68fb76d5-c8fkh" Sep 11 23:31:52.579666 kubelet[2643]: I0911 23:31:52.579576 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5c0e109-f8a0-4fc2-b0bd-51a85de2b725-tigera-ca-bundle\") pod \"calico-typha-6c68fb76d5-c8fkh\" (UID: \"f5c0e109-f8a0-4fc2-b0bd-51a85de2b725\") " pod="calico-system/calico-typha-6c68fb76d5-c8fkh" Sep 11 23:31:52.579666 kubelet[2643]: I0911 23:31:52.579596 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5c0e109-f8a0-4fc2-b0bd-51a85de2b725-typha-certs\") pod \"calico-typha-6c68fb76d5-c8fkh\" (UID: 
\"f5c0e109-f8a0-4fc2-b0bd-51a85de2b725\") " pod="calico-system/calico-typha-6c68fb76d5-c8fkh" Sep 11 23:31:52.742907 kubelet[2643]: E0911 23:31:52.742869 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:52.743315 containerd[1535]: time="2025-09-11T23:31:52.743281213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c68fb76d5-c8fkh,Uid:f5c0e109-f8a0-4fc2-b0bd-51a85de2b725,Namespace:calico-system,Attempt:0,}" Sep 11 23:31:52.792177 containerd[1535]: time="2025-09-11T23:31:52.791593684Z" level=info msg="connecting to shim 18dc3ec09bdf401c4e6b6f6f66689a5338428df48ff917009ec96407c7e436b0" address="unix:///run/containerd/s/d67f8517a8180e8d733c5a1b4d43b3bdef90aa5dd6d520772f904f33d107556a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:31:52.799755 systemd[1]: Created slice kubepods-besteffort-pod88589bce_34e2_4ed0_922a_f88a5d107234.slice - libcontainer container kubepods-besteffort-pod88589bce_34e2_4ed0_922a_f88a5d107234.slice. Sep 11 23:31:52.848397 systemd[1]: Started cri-containerd-18dc3ec09bdf401c4e6b6f6f66689a5338428df48ff917009ec96407c7e436b0.scope - libcontainer container 18dc3ec09bdf401c4e6b6f6f66689a5338428df48ff917009ec96407c7e436b0. 
Sep 11 23:31:52.916552 containerd[1535]: time="2025-09-11T23:31:52.916497567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c68fb76d5-c8fkh,Uid:f5c0e109-f8a0-4fc2-b0bd-51a85de2b725,Namespace:calico-system,Attempt:0,} returns sandbox id \"18dc3ec09bdf401c4e6b6f6f66689a5338428df48ff917009ec96407c7e436b0\"" Sep 11 23:31:52.919821 kubelet[2643]: E0911 23:31:52.919540 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:52.925185 containerd[1535]: time="2025-09-11T23:31:52.924580498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 23:31:52.982197 kubelet[2643]: I0911 23:31:52.982135 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-xtables-lock\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982197 kubelet[2643]: I0911 23:31:52.982191 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-cni-net-dir\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982366 kubelet[2643]: I0911 23:31:52.982210 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-flexvol-driver-host\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982366 kubelet[2643]: I0911 23:31:52.982247 2643 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-policysync\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982366 kubelet[2643]: I0911 23:31:52.982300 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tclbd\" (UniqueName: \"kubernetes.io/projected/88589bce-34e2-4ed0-922a-f88a5d107234-kube-api-access-tclbd\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982434 kubelet[2643]: I0911 23:31:52.982372 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-cni-bin-dir\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982434 kubelet[2643]: I0911 23:31:52.982391 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-cni-log-dir\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982434 kubelet[2643]: I0911 23:31:52.982412 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-lib-modules\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982434 kubelet[2643]: I0911 23:31:52.982429 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-certs\" (UniqueName: \"kubernetes.io/secret/88589bce-34e2-4ed0-922a-f88a5d107234-node-certs\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982515 kubelet[2643]: I0911 23:31:52.982477 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-var-lib-calico\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982515 kubelet[2643]: I0911 23:31:52.982495 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88589bce-34e2-4ed0-922a-f88a5d107234-tigera-ca-bundle\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:52.982515 kubelet[2643]: I0911 23:31:52.982514 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/88589bce-34e2-4ed0-922a-f88a5d107234-var-run-calico\") pod \"calico-node-8pdgx\" (UID: \"88589bce-34e2-4ed0-922a-f88a5d107234\") " pod="calico-system/calico-node-8pdgx" Sep 11 23:31:53.077674 kubelet[2643]: E0911 23:31:53.077519 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhns" podUID="3f9c8214-eabe-4aaf-a4d0-d65795581bdd" Sep 11 23:31:53.084180 kubelet[2643]: E0911 23:31:53.084132 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.084979 kubelet[2643]: W0911 
23:31:53.084494 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.088248 kubelet[2643]: E0911 23:31:53.088202 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.088248 kubelet[2643]: W0911 23:31:53.088233 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.088347 kubelet[2643]: E0911 23:31:53.088257 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.088408 kubelet[2643]: E0911 23:31:53.088385 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.088462 kubelet[2643]: E0911 23:31:53.088407 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.088518 kubelet[2643]: W0911 23:31:53.088505 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.088591 kubelet[2643]: E0911 23:31:53.088570 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.183917 kubelet[2643]: E0911 23:31:53.183877 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.183917 kubelet[2643]: W0911 23:31:53.183902 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.183917 kubelet[2643]: E0911 23:31:53.183921 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.184098 kubelet[2643]: I0911 23:31:53.183948 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3f9c8214-eabe-4aaf-a4d0-d65795581bdd-registration-dir\") pod \"csi-node-driver-5bhns\" (UID: \"3f9c8214-eabe-4aaf-a4d0-d65795581bdd\") " pod="calico-system/csi-node-driver-5bhns" Sep 11 23:31:53.184211 kubelet[2643]: E0911 23:31:53.184190 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.184211 kubelet[2643]: W0911 23:31:53.184208 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.184273 kubelet[2643]: E0911 23:31:53.184234 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.184407 kubelet[2643]: E0911 23:31:53.184396 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.184407 kubelet[2643]: W0911 23:31:53.184406 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.184463 kubelet[2643]: E0911 23:31:53.184421 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.184597 kubelet[2643]: E0911 23:31:53.184569 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.184597 kubelet[2643]: W0911 23:31:53.184581 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.184597 kubelet[2643]: E0911 23:31:53.184590 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.184670 kubelet[2643]: I0911 23:31:53.184623 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f9c8214-eabe-4aaf-a4d0-d65795581bdd-kubelet-dir\") pod \"csi-node-driver-5bhns\" (UID: \"3f9c8214-eabe-4aaf-a4d0-d65795581bdd\") " pod="calico-system/csi-node-driver-5bhns" Sep 11 23:31:53.184803 kubelet[2643]: E0911 23:31:53.184790 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.184824 kubelet[2643]: W0911 23:31:53.184801 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.184824 kubelet[2643]: E0911 23:31:53.184815 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.184868 kubelet[2643]: I0911 23:31:53.184845 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3f9c8214-eabe-4aaf-a4d0-d65795581bdd-varrun\") pod \"csi-node-driver-5bhns\" (UID: \"3f9c8214-eabe-4aaf-a4d0-d65795581bdd\") " pod="calico-system/csi-node-driver-5bhns" Sep 11 23:31:53.184983 kubelet[2643]: E0911 23:31:53.184968 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.185003 kubelet[2643]: W0911 23:31:53.184983 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.185003 kubelet[2643]: E0911 23:31:53.184998 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.185163 kubelet[2643]: E0911 23:31:53.185135 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.185163 kubelet[2643]: W0911 23:31:53.185152 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.185216 kubelet[2643]: E0911 23:31:53.185166 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.185308 kubelet[2643]: E0911 23:31:53.185296 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.185308 kubelet[2643]: W0911 23:31:53.185307 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.185349 kubelet[2643]: E0911 23:31:53.185318 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.185349 kubelet[2643]: I0911 23:31:53.185337 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ltgm\" (UniqueName: \"kubernetes.io/projected/3f9c8214-eabe-4aaf-a4d0-d65795581bdd-kube-api-access-4ltgm\") pod \"csi-node-driver-5bhns\" (UID: \"3f9c8214-eabe-4aaf-a4d0-d65795581bdd\") " pod="calico-system/csi-node-driver-5bhns" Sep 11 23:31:53.185496 kubelet[2643]: E0911 23:31:53.185483 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.185496 kubelet[2643]: W0911 23:31:53.185494 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.185545 kubelet[2643]: E0911 23:31:53.185506 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.185545 kubelet[2643]: I0911 23:31:53.185521 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3f9c8214-eabe-4aaf-a4d0-d65795581bdd-socket-dir\") pod \"csi-node-driver-5bhns\" (UID: \"3f9c8214-eabe-4aaf-a4d0-d65795581bdd\") " pod="calico-system/csi-node-driver-5bhns" Sep 11 23:31:53.185674 kubelet[2643]: E0911 23:31:53.185657 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.185702 kubelet[2643]: W0911 23:31:53.185672 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.185702 kubelet[2643]: E0911 23:31:53.185693 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.185856 kubelet[2643]: E0911 23:31:53.185845 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.185876 kubelet[2643]: W0911 23:31:53.185856 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.185876 kubelet[2643]: E0911 23:31:53.185868 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.186029 kubelet[2643]: E0911 23:31:53.186018 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.186054 kubelet[2643]: W0911 23:31:53.186029 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.186054 kubelet[2643]: E0911 23:31:53.186043 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.186220 kubelet[2643]: E0911 23:31:53.186206 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.186255 kubelet[2643]: W0911 23:31:53.186232 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.186255 kubelet[2643]: E0911 23:31:53.186244 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.186417 kubelet[2643]: E0911 23:31:53.186405 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.186417 kubelet[2643]: W0911 23:31:53.186416 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.186470 kubelet[2643]: E0911 23:31:53.186425 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.186588 kubelet[2643]: E0911 23:31:53.186576 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.186588 kubelet[2643]: W0911 23:31:53.186587 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.186631 kubelet[2643]: E0911 23:31:53.186597 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.286793 kubelet[2643]: E0911 23:31:53.286626 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.286793 kubelet[2643]: W0911 23:31:53.286652 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.286793 kubelet[2643]: E0911 23:31:53.286680 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.287087 kubelet[2643]: E0911 23:31:53.287072 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.287168 kubelet[2643]: W0911 23:31:53.287134 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.287252 kubelet[2643]: E0911 23:31:53.287239 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.287498 kubelet[2643]: E0911 23:31:53.287456 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.287498 kubelet[2643]: W0911 23:31:53.287476 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.287498 kubelet[2643]: E0911 23:31:53.287497 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.287677 kubelet[2643]: E0911 23:31:53.287664 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.287677 kubelet[2643]: W0911 23:31:53.287675 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.287745 kubelet[2643]: E0911 23:31:53.287687 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.287832 kubelet[2643]: E0911 23:31:53.287821 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.287832 kubelet[2643]: W0911 23:31:53.287831 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.287893 kubelet[2643]: E0911 23:31:53.287843 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.288053 kubelet[2643]: E0911 23:31:53.288029 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.288085 kubelet[2643]: W0911 23:31:53.288053 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.288085 kubelet[2643]: E0911 23:31:53.288071 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.288291 kubelet[2643]: E0911 23:31:53.288280 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.288291 kubelet[2643]: W0911 23:31:53.288291 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.288351 kubelet[2643]: E0911 23:31:53.288309 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.288496 kubelet[2643]: E0911 23:31:53.288484 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.288525 kubelet[2643]: W0911 23:31:53.288498 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.288576 kubelet[2643]: E0911 23:31:53.288562 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.288653 kubelet[2643]: E0911 23:31:53.288632 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.288653 kubelet[2643]: W0911 23:31:53.288643 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.288707 kubelet[2643]: E0911 23:31:53.288692 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.288844 kubelet[2643]: E0911 23:31:53.288827 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.288844 kubelet[2643]: W0911 23:31:53.288839 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.288899 kubelet[2643]: E0911 23:31:53.288864 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.289048 kubelet[2643]: E0911 23:31:53.289034 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289048 kubelet[2643]: W0911 23:31:53.289045 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.289105 kubelet[2643]: E0911 23:31:53.289058 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.289207 kubelet[2643]: E0911 23:31:53.289197 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289207 kubelet[2643]: W0911 23:31:53.289207 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.289262 kubelet[2643]: E0911 23:31:53.289230 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.289381 kubelet[2643]: E0911 23:31:53.289372 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289381 kubelet[2643]: W0911 23:31:53.289381 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.289435 kubelet[2643]: E0911 23:31:53.289393 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.289561 kubelet[2643]: E0911 23:31:53.289542 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289561 kubelet[2643]: W0911 23:31:53.289552 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.289666 kubelet[2643]: E0911 23:31:53.289630 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.289746 kubelet[2643]: E0911 23:31:53.289674 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289746 kubelet[2643]: W0911 23:31:53.289682 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.289746 kubelet[2643]: E0911 23:31:53.289711 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.289832 kubelet[2643]: E0911 23:31:53.289800 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289832 kubelet[2643]: W0911 23:31:53.289807 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.289832 kubelet[2643]: E0911 23:31:53.289820 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.289959 kubelet[2643]: E0911 23:31:53.289945 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.289959 kubelet[2643]: W0911 23:31:53.289955 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.290037 kubelet[2643]: E0911 23:31:53.290024 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.290134 kubelet[2643]: E0911 23:31:53.290124 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.290187 kubelet[2643]: W0911 23:31:53.290134 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.290249 kubelet[2643]: E0911 23:31:53.290234 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.290315 kubelet[2643]: E0911 23:31:53.290298 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.290315 kubelet[2643]: W0911 23:31:53.290312 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.290375 kubelet[2643]: E0911 23:31:53.290339 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.290541 kubelet[2643]: E0911 23:31:53.290527 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.290541 kubelet[2643]: W0911 23:31:53.290539 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.290602 kubelet[2643]: E0911 23:31:53.290551 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.290803 kubelet[2643]: E0911 23:31:53.290788 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.290803 kubelet[2643]: W0911 23:31:53.290801 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.290871 kubelet[2643]: E0911 23:31:53.290814 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.291032 kubelet[2643]: E0911 23:31:53.291019 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.291480 kubelet[2643]: W0911 23:31:53.291036 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.291480 kubelet[2643]: E0911 23:31:53.291052 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.291551 kubelet[2643]: E0911 23:31:53.291491 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.291551 kubelet[2643]: W0911 23:31:53.291503 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.291551 kubelet[2643]: E0911 23:31:53.291521 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.291717 kubelet[2643]: E0911 23:31:53.291703 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.291717 kubelet[2643]: W0911 23:31:53.291715 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.291770 kubelet[2643]: E0911 23:31:53.291732 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:53.292629 kubelet[2643]: E0911 23:31:53.291904 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.292629 kubelet[2643]: W0911 23:31:53.291917 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.292629 kubelet[2643]: E0911 23:31:53.291925 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:53.303216 kubelet[2643]: E0911 23:31:53.303188 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:53.303216 kubelet[2643]: W0911 23:31:53.303209 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:53.303357 kubelet[2643]: E0911 23:31:53.303238 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 11 23:31:53.404847 containerd[1535]: time="2025-09-11T23:31:53.404576386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pdgx,Uid:88589bce-34e2-4ed0-922a-f88a5d107234,Namespace:calico-system,Attempt:0,}"
Sep 11 23:31:53.429227 containerd[1535]: time="2025-09-11T23:31:53.429144505Z" level=info msg="connecting to shim 770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5" address="unix:///run/containerd/s/c7285a8d2cae3dd504509564bc1b0f026a7c68ff9198f6da6d2bb38c4c8d01e1" namespace=k8s.io protocol=ttrpc version=3
Sep 11 23:31:53.454432 systemd[1]: Started cri-containerd-770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5.scope - libcontainer container 770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5.
Sep 11 23:31:53.476354 containerd[1535]: time="2025-09-11T23:31:53.476316711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pdgx,Uid:88589bce-34e2-4ed0-922a-f88a5d107234,Namespace:calico-system,Attempt:0,} returns sandbox id \"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\""
Sep 11 23:31:53.961792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532207911.mount: Deactivated successfully.
Sep 11 23:31:54.515244 containerd[1535]: time="2025-09-11T23:31:54.515193926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:54.515703 containerd[1535]: time="2025-09-11T23:31:54.515672724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 11 23:31:54.516446 containerd[1535]: time="2025-09-11T23:31:54.516418042Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:54.518082 containerd[1535]: time="2025-09-11T23:31:54.518056237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:31:54.518637 containerd[1535]: time="2025-09-11T23:31:54.518606595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.593981177s"
Sep 11 23:31:54.518637 containerd[1535]: time="2025-09-11T23:31:54.518635835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 11 23:31:54.519469 containerd[1535]: time="2025-09-11T23:31:54.519445473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 11 23:31:54.537946 containerd[1535]: time="2025-09-11T23:31:54.537907616Z" level=info msg="CreateContainer within sandbox \"18dc3ec09bdf401c4e6b6f6f66689a5338428df48ff917009ec96407c7e436b0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 11 23:31:54.544158 containerd[1535]: time="2025-09-11T23:31:54.544098637Z" level=info msg="Container f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:31:54.550476 containerd[1535]: time="2025-09-11T23:31:54.550417937Z" level=info msg="CreateContainer within sandbox \"18dc3ec09bdf401c4e6b6f6f66689a5338428df48ff917009ec96407c7e436b0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff\""
Sep 11 23:31:54.551405 containerd[1535]: time="2025-09-11T23:31:54.551304855Z" level=info msg="StartContainer for \"f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff\""
Sep 11 23:31:54.553418 containerd[1535]: time="2025-09-11T23:31:54.553367288Z" level=info msg="connecting to shim f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff" address="unix:///run/containerd/s/d67f8517a8180e8d733c5a1b4d43b3bdef90aa5dd6d520772f904f33d107556a" protocol=ttrpc version=3
Sep 11 23:31:54.574327 systemd[1]: Started cri-containerd-f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff.scope - libcontainer container f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff.
Sep 11 23:31:54.617365 containerd[1535]: time="2025-09-11T23:31:54.617319612Z" level=info msg="StartContainer for \"f2ec6f5bf2db9e30ab0830e3da5b2cfa4788db508d452abcb6239cf144f6cfff\" returns successfully" Sep 11 23:31:54.787648 kubelet[2643]: E0911 23:31:54.787269 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhns" podUID="3f9c8214-eabe-4aaf-a4d0-d65795581bdd" Sep 11 23:31:54.881301 kubelet[2643]: E0911 23:31:54.881267 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:54.892734 kubelet[2643]: I0911 23:31:54.892647 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c68fb76d5-c8fkh" podStartSLOduration=1.293523525 podStartE2EDuration="2.892627444s" podCreationTimestamp="2025-09-11 23:31:52 +0000 UTC" firstStartedPulling="2025-09-11 23:31:52.920215874 +0000 UTC m=+19.229466821" lastFinishedPulling="2025-09-11 23:31:54.519319833 +0000 UTC m=+20.828570740" observedRunningTime="2025-09-11 23:31:54.892465885 +0000 UTC m=+21.201716872" watchObservedRunningTime="2025-09-11 23:31:54.892627444 +0000 UTC m=+21.201878391" Sep 11 23:31:54.916728 kubelet[2643]: E0911 23:31:54.916573 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.916728 kubelet[2643]: W0911 23:31:54.916599 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.916728 kubelet[2643]: E0911 23:31:54.916618 2643 plugins.go:691] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.916947 kubelet[2643]: E0911 23:31:54.916934 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.917027 kubelet[2643]: W0911 23:31:54.917014 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.917077 kubelet[2643]: E0911 23:31:54.917067 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.917305 kubelet[2643]: E0911 23:31:54.917290 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.917366 kubelet[2643]: W0911 23:31:54.917355 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.917429 kubelet[2643]: E0911 23:31:54.917418 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:54.917721 kubelet[2643]: E0911 23:31:54.917623 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.917721 kubelet[2643]: W0911 23:31:54.917635 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.917721 kubelet[2643]: E0911 23:31:54.917644 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.917903 kubelet[2643]: E0911 23:31:54.917890 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.917957 kubelet[2643]: W0911 23:31:54.917946 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.918005 kubelet[2643]: E0911 23:31:54.917996 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:54.918237 kubelet[2643]: E0911 23:31:54.918223 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.918299 kubelet[2643]: W0911 23:31:54.918289 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.918365 kubelet[2643]: E0911 23:31:54.918353 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.918567 kubelet[2643]: E0911 23:31:54.918554 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.918625 kubelet[2643]: W0911 23:31:54.918615 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.918672 kubelet[2643]: E0911 23:31:54.918663 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:54.918923 kubelet[2643]: E0911 23:31:54.918912 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.918983 kubelet[2643]: W0911 23:31:54.918973 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.919041 kubelet[2643]: E0911 23:31:54.919030 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.919276 kubelet[2643]: E0911 23:31:54.919263 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.919423 kubelet[2643]: W0911 23:31:54.919326 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.919423 kubelet[2643]: E0911 23:31:54.919339 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:54.919543 kubelet[2643]: E0911 23:31:54.919532 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.919605 kubelet[2643]: W0911 23:31:54.919594 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.919662 kubelet[2643]: E0911 23:31:54.919651 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.919921 kubelet[2643]: E0911 23:31:54.919830 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.919921 kubelet[2643]: W0911 23:31:54.919841 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.919921 kubelet[2643]: E0911 23:31:54.919850 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:54.920103 kubelet[2643]: E0911 23:31:54.920090 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.920183 kubelet[2643]: W0911 23:31:54.920141 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.920270 kubelet[2643]: E0911 23:31:54.920256 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.920560 kubelet[2643]: E0911 23:31:54.920469 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.920560 kubelet[2643]: W0911 23:31:54.920480 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.920560 kubelet[2643]: E0911 23:31:54.920489 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:54.920710 kubelet[2643]: E0911 23:31:54.920698 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.920757 kubelet[2643]: W0911 23:31:54.920747 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.920895 kubelet[2643]: E0911 23:31:54.920807 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:54.921004 kubelet[2643]: E0911 23:31:54.920991 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:54.921102 kubelet[2643]: W0911 23:31:54.921090 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:54.921235 kubelet[2643]: E0911 23:31:54.921142 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.000667 kubelet[2643]: E0911 23:31:55.000622 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.000667 kubelet[2643]: W0911 23:31:55.000643 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.000667 kubelet[2643]: E0911 23:31:55.000660 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.000883 kubelet[2643]: E0911 23:31:55.000868 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.000883 kubelet[2643]: W0911 23:31:55.000879 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.000932 kubelet[2643]: E0911 23:31:55.000893 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.001112 kubelet[2643]: E0911 23:31:55.001099 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.001112 kubelet[2643]: W0911 23:31:55.001110 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.001194 kubelet[2643]: E0911 23:31:55.001126 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.001318 kubelet[2643]: E0911 23:31:55.001304 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.001318 kubelet[2643]: W0911 23:31:55.001316 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.001372 kubelet[2643]: E0911 23:31:55.001330 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.001478 kubelet[2643]: E0911 23:31:55.001467 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.001478 kubelet[2643]: W0911 23:31:55.001477 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.001548 kubelet[2643]: E0911 23:31:55.001489 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.001614 kubelet[2643]: E0911 23:31:55.001603 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.001614 kubelet[2643]: W0911 23:31:55.001612 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.001651 kubelet[2643]: E0911 23:31:55.001623 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.001782 kubelet[2643]: E0911 23:31:55.001771 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.001782 kubelet[2643]: W0911 23:31:55.001781 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.001842 kubelet[2643]: E0911 23:31:55.001793 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.002023 kubelet[2643]: E0911 23:31:55.002008 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.002061 kubelet[2643]: W0911 23:31:55.002024 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.002061 kubelet[2643]: E0911 23:31:55.002042 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.002221 kubelet[2643]: E0911 23:31:55.002203 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.002221 kubelet[2643]: W0911 23:31:55.002220 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.002286 kubelet[2643]: E0911 23:31:55.002234 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.002391 kubelet[2643]: E0911 23:31:55.002380 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.002391 kubelet[2643]: W0911 23:31:55.002391 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.002440 kubelet[2643]: E0911 23:31:55.002404 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.002560 kubelet[2643]: E0911 23:31:55.002550 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.002560 kubelet[2643]: W0911 23:31:55.002559 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.002610 kubelet[2643]: E0911 23:31:55.002572 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.002958 kubelet[2643]: E0911 23:31:55.002842 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.002958 kubelet[2643]: W0911 23:31:55.002859 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.002958 kubelet[2643]: E0911 23:31:55.002876 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.003102 kubelet[2643]: E0911 23:31:55.003090 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.003175 kubelet[2643]: W0911 23:31:55.003144 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.003241 kubelet[2643]: E0911 23:31:55.003231 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.003454 kubelet[2643]: E0911 23:31:55.003439 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.003454 kubelet[2643]: W0911 23:31:55.003452 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.003510 kubelet[2643]: E0911 23:31:55.003469 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.003617 kubelet[2643]: E0911 23:31:55.003605 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.003617 kubelet[2643]: W0911 23:31:55.003615 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.003699 kubelet[2643]: E0911 23:31:55.003623 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.003742 kubelet[2643]: E0911 23:31:55.003731 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.003742 kubelet[2643]: W0911 23:31:55.003740 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.003779 kubelet[2643]: E0911 23:31:55.003747 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.003891 kubelet[2643]: E0911 23:31:55.003881 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.003911 kubelet[2643]: W0911 23:31:55.003891 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.003911 kubelet[2643]: E0911 23:31:55.003899 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 23:31:55.004185 kubelet[2643]: E0911 23:31:55.004172 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 23:31:55.004185 kubelet[2643]: W0911 23:31:55.004184 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 23:31:55.004250 kubelet[2643]: E0911 23:31:55.004194 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 23:31:55.659240 containerd[1535]: time="2025-09-11T23:31:55.659181532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:31:55.659805 containerd[1535]: time="2025-09-11T23:31:55.659776290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 11 23:31:55.660367 containerd[1535]: time="2025-09-11T23:31:55.660347448Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:31:55.662666 containerd[1535]: time="2025-09-11T23:31:55.662614202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:31:55.663342 containerd[1535]: time="2025-09-11T23:31:55.663027081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.143546008s" Sep 11 23:31:55.663342 containerd[1535]: time="2025-09-11T23:31:55.663061041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 11 23:31:55.665078 containerd[1535]: time="2025-09-11T23:31:55.665043035Z" level=info msg="CreateContainer within sandbox \"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 23:31:55.675001 containerd[1535]: time="2025-09-11T23:31:55.672367734Z" level=info msg="Container db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:31:55.690115 containerd[1535]: time="2025-09-11T23:31:55.690063843Z" level=info msg="CreateContainer within sandbox \"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\"" Sep 11 23:31:55.691346 containerd[1535]: time="2025-09-11T23:31:55.691311799Z" level=info msg="StartContainer for \"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\"" Sep 11 23:31:55.693909 containerd[1535]: time="2025-09-11T23:31:55.693875792Z" level=info msg="connecting to shim db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408" address="unix:///run/containerd/s/c7285a8d2cae3dd504509564bc1b0f026a7c68ff9198f6da6d2bb38c4c8d01e1" protocol=ttrpc version=3 Sep 11 23:31:55.716355 systemd[1]: Started cri-containerd-db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408.scope - libcontainer container db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408. Sep 11 23:31:55.750375 containerd[1535]: time="2025-09-11T23:31:55.750338309Z" level=info msg="StartContainer for \"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\" returns successfully" Sep 11 23:31:55.760182 systemd[1]: cri-containerd-db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408.scope: Deactivated successfully. 
Sep 11 23:31:55.791257 containerd[1535]: time="2025-09-11T23:31:55.791212911Z" level=info msg="received exit event container_id:\"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\" id:\"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\" pid:3351 exited_at:{seconds:1757633515 nanos:772589405}" Sep 11 23:31:55.792299 containerd[1535]: time="2025-09-11T23:31:55.792266828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\" id:\"db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408\" pid:3351 exited_at:{seconds:1757633515 nanos:772589405}" Sep 11 23:31:55.824375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db2e32e06cabe940aff9635c512ae2cd2c5dfd8002638a9bc040ee7a922ac408-rootfs.mount: Deactivated successfully. Sep 11 23:31:55.884910 kubelet[2643]: I0911 23:31:55.884886 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:31:55.885938 kubelet[2643]: E0911 23:31:55.885631 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:31:56.788168 kubelet[2643]: E0911 23:31:56.788096 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhns" podUID="3f9c8214-eabe-4aaf-a4d0-d65795581bdd" Sep 11 23:31:56.890908 containerd[1535]: time="2025-09-11T23:31:56.890869899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 23:31:58.787237 kubelet[2643]: E0911 23:31:58.787169 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhns" podUID="3f9c8214-eabe-4aaf-a4d0-d65795581bdd" Sep 11 23:32:00.589822 containerd[1535]: time="2025-09-11T23:32:00.589766349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:00.590657 containerd[1535]: time="2025-09-11T23:32:00.590607547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 11 23:32:00.591630 containerd[1535]: time="2025-09-11T23:32:00.591601665Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:00.594082 containerd[1535]: time="2025-09-11T23:32:00.594031820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:00.595598 containerd[1535]: time="2025-09-11T23:32:00.595557217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.704645958s" Sep 11 23:32:00.595598 containerd[1535]: time="2025-09-11T23:32:00.595596897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 11 23:32:00.602080 containerd[1535]: time="2025-09-11T23:32:00.602010443Z" level=info msg="CreateContainer within sandbox \"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 23:32:00.625016 containerd[1535]: time="2025-09-11T23:32:00.624965235Z" level=info msg="Container dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:00.633325 containerd[1535]: time="2025-09-11T23:32:00.633266978Z" level=info msg="CreateContainer within sandbox \"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\"" Sep 11 23:32:00.636930 containerd[1535]: time="2025-09-11T23:32:00.636869931Z" level=info msg="StartContainer for \"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\"" Sep 11 23:32:00.638781 containerd[1535]: time="2025-09-11T23:32:00.638745927Z" level=info msg="connecting to shim dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743" address="unix:///run/containerd/s/c7285a8d2cae3dd504509564bc1b0f026a7c68ff9198f6da6d2bb38c4c8d01e1" protocol=ttrpc version=3 Sep 11 23:32:00.665349 systemd[1]: Started cri-containerd-dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743.scope - libcontainer container dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743. 
Sep 11 23:32:00.703915 containerd[1535]: time="2025-09-11T23:32:00.703874551Z" level=info msg="StartContainer for \"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\" returns successfully" Sep 11 23:32:00.787755 kubelet[2643]: E0911 23:32:00.787704 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bhns" podUID="3f9c8214-eabe-4aaf-a4d0-d65795581bdd" Sep 11 23:32:01.457110 systemd[1]: cri-containerd-dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743.scope: Deactivated successfully. Sep 11 23:32:01.457394 systemd[1]: cri-containerd-dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743.scope: Consumed 472ms CPU time, 176.3M memory peak, 2.9M read from disk, 165.8M written to disk. Sep 11 23:32:01.469091 containerd[1535]: time="2025-09-11T23:32:01.469052373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\" id:\"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\" pid:3411 exited_at:{seconds:1757633521 nanos:468720974}" Sep 11 23:32:01.481162 containerd[1535]: time="2025-09-11T23:32:01.481090109Z" level=info msg="received exit event container_id:\"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\" id:\"dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743\" pid:3411 exited_at:{seconds:1757633521 nanos:468720974}" Sep 11 23:32:01.481503 kubelet[2643]: I0911 23:32:01.481454 2643 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 11 23:32:01.501016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbe48b733051c39215abd0f4e14e019a35d9217ff50ad5226174a3d522904743-rootfs.mount: Deactivated successfully. 
Sep 11 23:32:01.672289 systemd[1]: Created slice kubepods-burstable-podc0331666_75da_4a66_8211_15dd3dd0b456.slice - libcontainer container kubepods-burstable-podc0331666_75da_4a66_8211_15dd3dd0b456.slice. Sep 11 23:32:01.693683 systemd[1]: Created slice kubepods-besteffort-pod75746b2c_a7a7_4421_b06d_8194cafe093e.slice - libcontainer container kubepods-besteffort-pod75746b2c_a7a7_4421_b06d_8194cafe093e.slice. Sep 11 23:32:01.702708 systemd[1]: Created slice kubepods-burstable-pod1e185599_07f5_454f_8fea_acc323edf3a2.slice - libcontainer container kubepods-burstable-pod1e185599_07f5_454f_8fea_acc323edf3a2.slice. Sep 11 23:32:01.710626 systemd[1]: Created slice kubepods-besteffort-poda89bf952_f3e8_4d3f_a4f3_e05f3974ae84.slice - libcontainer container kubepods-besteffort-poda89bf952_f3e8_4d3f_a4f3_e05f3974ae84.slice. Sep 11 23:32:01.717417 systemd[1]: Created slice kubepods-besteffort-podfc9c5be1_c38d_4ddf_829e_7459b27fd795.slice - libcontainer container kubepods-besteffort-podfc9c5be1_c38d_4ddf_829e_7459b27fd795.slice. Sep 11 23:32:01.725292 systemd[1]: Created slice kubepods-besteffort-pod4c4100f2_36a9_4013_b2ae_8574f73a72c8.slice - libcontainer container kubepods-besteffort-pod4c4100f2_36a9_4013_b2ae_8574f73a72c8.slice. Sep 11 23:32:01.731549 systemd[1]: Created slice kubepods-besteffort-pode49b49a8_da14_4dec_baca_eaf949b44951.slice - libcontainer container kubepods-besteffort-pode49b49a8_da14_4dec_baca_eaf949b44951.slice. 
Sep 11 23:32:01.746355 kubelet[2643]: I0911 23:32:01.746281 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0331666-75da-4a66-8211-15dd3dd0b456-config-volume\") pod \"coredns-7c65d6cfc9-p95x6\" (UID: \"c0331666-75da-4a66-8211-15dd3dd0b456\") " pod="kube-system/coredns-7c65d6cfc9-p95x6" Sep 11 23:32:01.746355 kubelet[2643]: I0911 23:32:01.746343 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5tq\" (UniqueName: \"kubernetes.io/projected/c0331666-75da-4a66-8211-15dd3dd0b456-kube-api-access-6x5tq\") pod \"coredns-7c65d6cfc9-p95x6\" (UID: \"c0331666-75da-4a66-8211-15dd3dd0b456\") " pod="kube-system/coredns-7c65d6cfc9-p95x6" Sep 11 23:32:01.847890 kubelet[2643]: I0911 23:32:01.847446 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-ca-bundle\") pod \"whisker-67769c6687-gg22k\" (UID: \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\") " pod="calico-system/whisker-67769c6687-gg22k" Sep 11 23:32:01.847890 kubelet[2643]: I0911 23:32:01.847550 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e185599-07f5-454f-8fea-acc323edf3a2-config-volume\") pod \"coredns-7c65d6cfc9-9dt6t\" (UID: \"1e185599-07f5-454f-8fea-acc323edf3a2\") " pod="kube-system/coredns-7c65d6cfc9-9dt6t" Sep 11 23:32:01.847890 kubelet[2643]: I0911 23:32:01.847595 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8c8n\" (UniqueName: \"kubernetes.io/projected/4c4100f2-36a9-4013-b2ae-8574f73a72c8-kube-api-access-x8c8n\") pod \"calico-apiserver-6fd884bcf7-l7km6\" (UID: \"4c4100f2-36a9-4013-b2ae-8574f73a72c8\") " 
pod="calico-apiserver/calico-apiserver-6fd884bcf7-l7km6" Sep 11 23:32:01.847890 kubelet[2643]: I0911 23:32:01.847613 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75746b2c-a7a7-4421-b06d-8194cafe093e-tigera-ca-bundle\") pod \"calico-kube-controllers-b4799c5c-5clrv\" (UID: \"75746b2c-a7a7-4421-b06d-8194cafe093e\") " pod="calico-system/calico-kube-controllers-b4799c5c-5clrv" Sep 11 23:32:01.847890 kubelet[2643]: I0911 23:32:01.847633 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89bf952-f3e8-4d3f-a4f3-e05f3974ae84-config\") pod \"goldmane-7988f88666-pxlgq\" (UID: \"a89bf952-f3e8-4d3f-a4f3-e05f3974ae84\") " pod="calico-system/goldmane-7988f88666-pxlgq" Sep 11 23:32:01.848450 kubelet[2643]: I0911 23:32:01.847672 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgr77\" (UniqueName: \"kubernetes.io/projected/a89bf952-f3e8-4d3f-a4f3-e05f3974ae84-kube-api-access-dgr77\") pod \"goldmane-7988f88666-pxlgq\" (UID: \"a89bf952-f3e8-4d3f-a4f3-e05f3974ae84\") " pod="calico-system/goldmane-7988f88666-pxlgq" Sep 11 23:32:01.848450 kubelet[2643]: I0911 23:32:01.847691 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlrj\" (UniqueName: \"kubernetes.io/projected/fc9c5be1-c38d-4ddf-829e-7459b27fd795-kube-api-access-btlrj\") pod \"whisker-67769c6687-gg22k\" (UID: \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\") " pod="calico-system/whisker-67769c6687-gg22k" Sep 11 23:32:01.848450 kubelet[2643]: I0911 23:32:01.847708 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c4100f2-36a9-4013-b2ae-8574f73a72c8-calico-apiserver-certs\") pod 
\"calico-apiserver-6fd884bcf7-l7km6\" (UID: \"4c4100f2-36a9-4013-b2ae-8574f73a72c8\") " pod="calico-apiserver/calico-apiserver-6fd884bcf7-l7km6" Sep 11 23:32:01.848450 kubelet[2643]: I0911 23:32:01.847765 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-backend-key-pair\") pod \"whisker-67769c6687-gg22k\" (UID: \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\") " pod="calico-system/whisker-67769c6687-gg22k" Sep 11 23:32:01.848450 kubelet[2643]: I0911 23:32:01.847809 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsh4j\" (UniqueName: \"kubernetes.io/projected/75746b2c-a7a7-4421-b06d-8194cafe093e-kube-api-access-nsh4j\") pod \"calico-kube-controllers-b4799c5c-5clrv\" (UID: \"75746b2c-a7a7-4421-b06d-8194cafe093e\") " pod="calico-system/calico-kube-controllers-b4799c5c-5clrv" Sep 11 23:32:01.848636 kubelet[2643]: I0911 23:32:01.847847 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e49b49a8-da14-4dec-baca-eaf949b44951-calico-apiserver-certs\") pod \"calico-apiserver-6fd884bcf7-8fcbf\" (UID: \"e49b49a8-da14-4dec-baca-eaf949b44951\") " pod="calico-apiserver/calico-apiserver-6fd884bcf7-8fcbf" Sep 11 23:32:01.848636 kubelet[2643]: I0911 23:32:01.847890 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqs58\" (UniqueName: \"kubernetes.io/projected/e49b49a8-da14-4dec-baca-eaf949b44951-kube-api-access-hqs58\") pod \"calico-apiserver-6fd884bcf7-8fcbf\" (UID: \"e49b49a8-da14-4dec-baca-eaf949b44951\") " pod="calico-apiserver/calico-apiserver-6fd884bcf7-8fcbf" Sep 11 23:32:01.848636 kubelet[2643]: I0911 23:32:01.847961 2643 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggk8q\" (UniqueName: \"kubernetes.io/projected/1e185599-07f5-454f-8fea-acc323edf3a2-kube-api-access-ggk8q\") pod \"coredns-7c65d6cfc9-9dt6t\" (UID: \"1e185599-07f5-454f-8fea-acc323edf3a2\") " pod="kube-system/coredns-7c65d6cfc9-9dt6t" Sep 11 23:32:01.848636 kubelet[2643]: I0911 23:32:01.847985 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89bf952-f3e8-4d3f-a4f3-e05f3974ae84-goldmane-ca-bundle\") pod \"goldmane-7988f88666-pxlgq\" (UID: \"a89bf952-f3e8-4d3f-a4f3-e05f3974ae84\") " pod="calico-system/goldmane-7988f88666-pxlgq" Sep 11 23:32:01.848636 kubelet[2643]: I0911 23:32:01.848031 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a89bf952-f3e8-4d3f-a4f3-e05f3974ae84-goldmane-key-pair\") pod \"goldmane-7988f88666-pxlgq\" (UID: \"a89bf952-f3e8-4d3f-a4f3-e05f3974ae84\") " pod="calico-system/goldmane-7988f88666-pxlgq" Sep 11 23:32:01.904772 containerd[1535]: time="2025-09-11T23:32:01.904727080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 23:32:01.978257 kubelet[2643]: E0911 23:32:01.978140 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:01.979972 containerd[1535]: time="2025-09-11T23:32:01.979872052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p95x6,Uid:c0331666-75da-4a66-8211-15dd3dd0b456,Namespace:kube-system,Attempt:0,}" Sep 11 23:32:01.998528 containerd[1535]: time="2025-09-11T23:32:01.998461496Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-b4799c5c-5clrv,Uid:75746b2c-a7a7-4421-b06d-8194cafe093e,Namespace:calico-system,Attempt:0,}" Sep 11 23:32:02.006708 kubelet[2643]: E0911 23:32:02.006404 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:02.010595 containerd[1535]: time="2025-09-11T23:32:02.010506754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9dt6t,Uid:1e185599-07f5-454f-8fea-acc323edf3a2,Namespace:kube-system,Attempt:0,}" Sep 11 23:32:02.021195 containerd[1535]: time="2025-09-11T23:32:02.021110774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-pxlgq,Uid:a89bf952-f3e8-4d3f-a4f3-e05f3974ae84,Namespace:calico-system,Attempt:0,}" Sep 11 23:32:02.022100 containerd[1535]: time="2025-09-11T23:32:02.022007452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67769c6687-gg22k,Uid:fc9c5be1-c38d-4ddf-829e-7459b27fd795,Namespace:calico-system,Attempt:0,}" Sep 11 23:32:02.030336 containerd[1535]: time="2025-09-11T23:32:02.030290317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-l7km6,Uid:4c4100f2-36a9-4013-b2ae-8574f73a72c8,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:32:02.040047 containerd[1535]: time="2025-09-11T23:32:02.040003339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-8fcbf,Uid:e49b49a8-da14-4dec-baca-eaf949b44951,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:32:02.137790 containerd[1535]: time="2025-09-11T23:32:02.137725120Z" level=error msg="Failed to destroy network for sandbox \"ff8a25fdbea6695f97073563da734470263b67539565205ce8d4a884788d2600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 11 23:32:02.140315 containerd[1535]: time="2025-09-11T23:32:02.140208635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p95x6,Uid:c0331666-75da-4a66-8211-15dd3dd0b456,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a25fdbea6695f97073563da734470263b67539565205ce8d4a884788d2600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.143000 kubelet[2643]: E0911 23:32:02.142773 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a25fdbea6695f97073563da734470263b67539565205ce8d4a884788d2600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.143000 kubelet[2643]: E0911 23:32:02.142856 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a25fdbea6695f97073563da734470263b67539565205ce8d4a884788d2600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p95x6" Sep 11 23:32:02.143000 kubelet[2643]: E0911 23:32:02.142878 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a25fdbea6695f97073563da734470263b67539565205ce8d4a884788d2600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-p95x6" Sep 11 23:32:02.143651 kubelet[2643]: E0911 23:32:02.143241 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p95x6_kube-system(c0331666-75da-4a66-8211-15dd3dd0b456)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p95x6_kube-system(c0331666-75da-4a66-8211-15dd3dd0b456)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff8a25fdbea6695f97073563da734470263b67539565205ce8d4a884788d2600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p95x6" podUID="c0331666-75da-4a66-8211-15dd3dd0b456" Sep 11 23:32:02.146597 containerd[1535]: time="2025-09-11T23:32:02.146538704Z" level=error msg="Failed to destroy network for sandbox \"4595f0fddf2f464dc1f3a4760532efeead4ca90d528a6c463e5959be66f26671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.150630 containerd[1535]: time="2025-09-11T23:32:02.150573616Z" level=error msg="Failed to destroy network for sandbox \"97499f2a95ca4d1b701d57d17289aac873379f121792069010400727a83a32ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.150750 containerd[1535]: time="2025-09-11T23:32:02.150487016Z" level=error msg="Failed to destroy network for sandbox \"84cf35094a302eba0a7269bfaf71ef9520d56c817431298adf93fdec0cd7776b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 11 23:32:02.151332 containerd[1535]: time="2025-09-11T23:32:02.151277335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-8fcbf,Uid:e49b49a8-da14-4dec-baca-eaf949b44951,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4595f0fddf2f464dc1f3a4760532efeead4ca90d528a6c463e5959be66f26671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.151556 kubelet[2643]: E0911 23:32:02.151474 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4595f0fddf2f464dc1f3a4760532efeead4ca90d528a6c463e5959be66f26671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.151556 kubelet[2643]: E0911 23:32:02.151522 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4595f0fddf2f464dc1f3a4760532efeead4ca90d528a6c463e5959be66f26671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fd884bcf7-8fcbf" Sep 11 23:32:02.151556 kubelet[2643]: E0911 23:32:02.151539 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4595f0fddf2f464dc1f3a4760532efeead4ca90d528a6c463e5959be66f26671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fd884bcf7-8fcbf" Sep 11 23:32:02.151656 kubelet[2643]: E0911 23:32:02.151572 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fd884bcf7-8fcbf_calico-apiserver(e49b49a8-da14-4dec-baca-eaf949b44951)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fd884bcf7-8fcbf_calico-apiserver(e49b49a8-da14-4dec-baca-eaf949b44951)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4595f0fddf2f464dc1f3a4760532efeead4ca90d528a6c463e5959be66f26671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fd884bcf7-8fcbf" podUID="e49b49a8-da14-4dec-baca-eaf949b44951" Sep 11 23:32:02.152143 containerd[1535]: time="2025-09-11T23:32:02.152083854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67769c6687-gg22k,Uid:fc9c5be1-c38d-4ddf-829e-7459b27fd795,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf35094a302eba0a7269bfaf71ef9520d56c817431298adf93fdec0cd7776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.152589 kubelet[2643]: E0911 23:32:02.152451 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf35094a302eba0a7269bfaf71ef9520d56c817431298adf93fdec0cd7776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.152589 kubelet[2643]: E0911 23:32:02.152499 2643 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf35094a302eba0a7269bfaf71ef9520d56c817431298adf93fdec0cd7776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67769c6687-gg22k" Sep 11 23:32:02.152589 kubelet[2643]: E0911 23:32:02.152516 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf35094a302eba0a7269bfaf71ef9520d56c817431298adf93fdec0cd7776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67769c6687-gg22k" Sep 11 23:32:02.152766 kubelet[2643]: E0911 23:32:02.152548 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67769c6687-gg22k_calico-system(fc9c5be1-c38d-4ddf-829e-7459b27fd795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67769c6687-gg22k_calico-system(fc9c5be1-c38d-4ddf-829e-7459b27fd795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84cf35094a302eba0a7269bfaf71ef9520d56c817431298adf93fdec0cd7776b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67769c6687-gg22k" podUID="fc9c5be1-c38d-4ddf-829e-7459b27fd795" Sep 11 23:32:02.153559 containerd[1535]: time="2025-09-11T23:32:02.153254171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9dt6t,Uid:1e185599-07f5-454f-8fea-acc323edf3a2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"97499f2a95ca4d1b701d57d17289aac873379f121792069010400727a83a32ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.154351 kubelet[2643]: E0911 23:32:02.153411 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97499f2a95ca4d1b701d57d17289aac873379f121792069010400727a83a32ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.154351 kubelet[2643]: E0911 23:32:02.153450 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97499f2a95ca4d1b701d57d17289aac873379f121792069010400727a83a32ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9dt6t" Sep 11 23:32:02.154351 kubelet[2643]: E0911 23:32:02.153465 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97499f2a95ca4d1b701d57d17289aac873379f121792069010400727a83a32ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9dt6t" Sep 11 23:32:02.154438 kubelet[2643]: E0911 23:32:02.153492 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9dt6t_kube-system(1e185599-07f5-454f-8fea-acc323edf3a2)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9dt6t_kube-system(1e185599-07f5-454f-8fea-acc323edf3a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97499f2a95ca4d1b701d57d17289aac873379f121792069010400727a83a32ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9dt6t" podUID="1e185599-07f5-454f-8fea-acc323edf3a2" Sep 11 23:32:02.156665 containerd[1535]: time="2025-09-11T23:32:02.156616725Z" level=error msg="Failed to destroy network for sandbox \"827b40f6d90ed246be855f146a779d80abe81b8218d67cae3aa81c04fea70e1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.157920 containerd[1535]: time="2025-09-11T23:32:02.157695883Z" level=error msg="Failed to destroy network for sandbox \"867709d6a2c2cc32f5f9e9f3f26e646a4705c2c8d0c369b30956fbbf370b31e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.157920 containerd[1535]: time="2025-09-11T23:32:02.157815763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4799c5c-5clrv,Uid:75746b2c-a7a7-4421-b06d-8194cafe093e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"827b40f6d90ed246be855f146a779d80abe81b8218d67cae3aa81c04fea70e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.159374 kubelet[2643]: E0911 23:32:02.158325 2643 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827b40f6d90ed246be855f146a779d80abe81b8218d67cae3aa81c04fea70e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.159374 kubelet[2643]: E0911 23:32:02.158381 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827b40f6d90ed246be855f146a779d80abe81b8218d67cae3aa81c04fea70e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4799c5c-5clrv" Sep 11 23:32:02.159374 kubelet[2643]: E0911 23:32:02.158399 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827b40f6d90ed246be855f146a779d80abe81b8218d67cae3aa81c04fea70e1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b4799c5c-5clrv" Sep 11 23:32:02.159511 containerd[1535]: time="2025-09-11T23:32:02.158715921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-pxlgq,Uid:a89bf952-f3e8-4d3f-a4f3-e05f3974ae84,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"867709d6a2c2cc32f5f9e9f3f26e646a4705c2c8d0c369b30956fbbf370b31e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.159558 kubelet[2643]: E0911 
23:32:02.158447 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b4799c5c-5clrv_calico-system(75746b2c-a7a7-4421-b06d-8194cafe093e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b4799c5c-5clrv_calico-system(75746b2c-a7a7-4421-b06d-8194cafe093e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"827b40f6d90ed246be855f146a779d80abe81b8218d67cae3aa81c04fea70e1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b4799c5c-5clrv" podUID="75746b2c-a7a7-4421-b06d-8194cafe093e" Sep 11 23:32:02.159558 kubelet[2643]: E0911 23:32:02.159337 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867709d6a2c2cc32f5f9e9f3f26e646a4705c2c8d0c369b30956fbbf370b31e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.159558 kubelet[2643]: E0911 23:32:02.159391 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867709d6a2c2cc32f5f9e9f3f26e646a4705c2c8d0c369b30956fbbf370b31e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-pxlgq" Sep 11 23:32:02.159641 kubelet[2643]: E0911 23:32:02.159406 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"867709d6a2c2cc32f5f9e9f3f26e646a4705c2c8d0c369b30956fbbf370b31e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-pxlgq" Sep 11 23:32:02.159641 kubelet[2643]: E0911 23:32:02.159436 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-pxlgq_calico-system(a89bf952-f3e8-4d3f-a4f3-e05f3974ae84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-pxlgq_calico-system(a89bf952-f3e8-4d3f-a4f3-e05f3974ae84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"867709d6a2c2cc32f5f9e9f3f26e646a4705c2c8d0c369b30956fbbf370b31e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-pxlgq" podUID="a89bf952-f3e8-4d3f-a4f3-e05f3974ae84" Sep 11 23:32:02.161822 containerd[1535]: time="2025-09-11T23:32:02.161783276Z" level=error msg="Failed to destroy network for sandbox \"8129c7703635576708c54aef07f75d72a1410f1ec278032353da625bef24b312\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.165312 containerd[1535]: time="2025-09-11T23:32:02.165250589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-l7km6,Uid:4c4100f2-36a9-4013-b2ae-8574f73a72c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8129c7703635576708c54aef07f75d72a1410f1ec278032353da625bef24b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.166245 kubelet[2643]: E0911 23:32:02.166085 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8129c7703635576708c54aef07f75d72a1410f1ec278032353da625bef24b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.166494 kubelet[2643]: E0911 23:32:02.166219 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8129c7703635576708c54aef07f75d72a1410f1ec278032353da625bef24b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fd884bcf7-l7km6" Sep 11 23:32:02.166494 kubelet[2643]: E0911 23:32:02.166373 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8129c7703635576708c54aef07f75d72a1410f1ec278032353da625bef24b312\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fd884bcf7-l7km6" Sep 11 23:32:02.166756 kubelet[2643]: E0911 23:32:02.166587 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fd884bcf7-l7km6_calico-apiserver(4c4100f2-36a9-4013-b2ae-8574f73a72c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fd884bcf7-l7km6_calico-apiserver(4c4100f2-36a9-4013-b2ae-8574f73a72c8)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"8129c7703635576708c54aef07f75d72a1410f1ec278032353da625bef24b312\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fd884bcf7-l7km6" podUID="4c4100f2-36a9-4013-b2ae-8574f73a72c8" Sep 11 23:32:02.792456 systemd[1]: Created slice kubepods-besteffort-pod3f9c8214_eabe_4aaf_a4d0_d65795581bdd.slice - libcontainer container kubepods-besteffort-pod3f9c8214_eabe_4aaf_a4d0_d65795581bdd.slice. Sep 11 23:32:02.794739 containerd[1535]: time="2025-09-11T23:32:02.794703313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhns,Uid:3f9c8214-eabe-4aaf-a4d0-d65795581bdd,Namespace:calico-system,Attempt:0,}" Sep 11 23:32:02.856430 containerd[1535]: time="2025-09-11T23:32:02.856366280Z" level=error msg="Failed to destroy network for sandbox \"e456a41a8d083a733d93e30cc08e7f241979a299bd48efd7896ed8fc547c04a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.857460 containerd[1535]: time="2025-09-11T23:32:02.857401518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhns,Uid:3f9c8214-eabe-4aaf-a4d0-d65795581bdd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e456a41a8d083a733d93e30cc08e7f241979a299bd48efd7896ed8fc547c04a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.857683 kubelet[2643]: E0911 23:32:02.857633 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e456a41a8d083a733d93e30cc08e7f241979a299bd48efd7896ed8fc547c04a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 23:32:02.857940 kubelet[2643]: E0911 23:32:02.857698 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e456a41a8d083a733d93e30cc08e7f241979a299bd48efd7896ed8fc547c04a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bhns" Sep 11 23:32:02.857940 kubelet[2643]: E0911 23:32:02.857718 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e456a41a8d083a733d93e30cc08e7f241979a299bd48efd7896ed8fc547c04a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bhns" Sep 11 23:32:02.857940 kubelet[2643]: E0911 23:32:02.857759 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5bhns_calico-system(3f9c8214-eabe-4aaf-a4d0-d65795581bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5bhns_calico-system(3f9c8214-eabe-4aaf-a4d0-d65795581bdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e456a41a8d083a733d93e30cc08e7f241979a299bd48efd7896ed8fc547c04a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5bhns" 
podUID="3f9c8214-eabe-4aaf-a4d0-d65795581bdd" Sep 11 23:32:05.227093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825752102.mount: Deactivated successfully. Sep 11 23:32:05.487306 containerd[1535]: time="2025-09-11T23:32:05.486975824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 11 23:32:05.498031 containerd[1535]: time="2025-09-11T23:32:05.497843167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.593070087s" Sep 11 23:32:05.498031 containerd[1535]: time="2025-09-11T23:32:05.497980047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 11 23:32:05.503516 containerd[1535]: time="2025-09-11T23:32:05.503430439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:05.504620 containerd[1535]: time="2025-09-11T23:32:05.504581477Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:05.506267 containerd[1535]: time="2025-09-11T23:32:05.505775395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:05.520383 containerd[1535]: time="2025-09-11T23:32:05.520337573Z" level=info msg="CreateContainer within sandbox 
\"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 23:32:05.534610 containerd[1535]: time="2025-09-11T23:32:05.534560712Z" level=info msg="Container 41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:05.538063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689287019.mount: Deactivated successfully. Sep 11 23:32:05.547256 containerd[1535]: time="2025-09-11T23:32:05.547130013Z" level=info msg="CreateContainer within sandbox \"770b463a3889a990fdaee4c25057e70f532fbbf0edfee6c59a6851ffabcc1bb5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71\"" Sep 11 23:32:05.549034 containerd[1535]: time="2025-09-11T23:32:05.549000290Z" level=info msg="StartContainer for \"41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71\"" Sep 11 23:32:05.550928 containerd[1535]: time="2025-09-11T23:32:05.550896047Z" level=info msg="connecting to shim 41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71" address="unix:///run/containerd/s/c7285a8d2cae3dd504509564bc1b0f026a7c68ff9198f6da6d2bb38c4c8d01e1" protocol=ttrpc version=3 Sep 11 23:32:05.575378 systemd[1]: Started cri-containerd-41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71.scope - libcontainer container 41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71. Sep 11 23:32:05.636406 containerd[1535]: time="2025-09-11T23:32:05.636367918Z" level=info msg="StartContainer for \"41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71\" returns successfully" Sep 11 23:32:05.760782 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 23:32:05.760940 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 23:32:05.970071 kubelet[2643]: I0911 23:32:05.968807 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8pdgx" podStartSLOduration=1.93839657 podStartE2EDuration="13.968790055s" podCreationTimestamp="2025-09-11 23:31:52 +0000 UTC" firstStartedPulling="2025-09-11 23:31:53.477372467 +0000 UTC m=+19.786623414" lastFinishedPulling="2025-09-11 23:32:05.507765992 +0000 UTC m=+31.817016899" observedRunningTime="2025-09-11 23:32:05.968132056 +0000 UTC m=+32.277383003" watchObservedRunningTime="2025-09-11 23:32:05.968790055 +0000 UTC m=+32.278040962" Sep 11 23:32:06.075787 kubelet[2643]: I0911 23:32:06.075738 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btlrj\" (UniqueName: \"kubernetes.io/projected/fc9c5be1-c38d-4ddf-829e-7459b27fd795-kube-api-access-btlrj\") pod \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\" (UID: \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\") " Sep 11 23:32:06.075787 kubelet[2643]: I0911 23:32:06.075793 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-backend-key-pair\") pod \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\" (UID: \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\") " Sep 11 23:32:06.075964 kubelet[2643]: I0911 23:32:06.075891 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-ca-bundle\") pod \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\" (UID: \"fc9c5be1-c38d-4ddf-829e-7459b27fd795\") " Sep 11 23:32:06.077469 kubelet[2643]: I0911 23:32:06.077429 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"fc9c5be1-c38d-4ddf-829e-7459b27fd795" (UID: "fc9c5be1-c38d-4ddf-829e-7459b27fd795"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 11 23:32:06.090291 kubelet[2643]: I0911 23:32:06.090233 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9c5be1-c38d-4ddf-829e-7459b27fd795-kube-api-access-btlrj" (OuterVolumeSpecName: "kube-api-access-btlrj") pod "fc9c5be1-c38d-4ddf-829e-7459b27fd795" (UID: "fc9c5be1-c38d-4ddf-829e-7459b27fd795"). InnerVolumeSpecName "kube-api-access-btlrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 11 23:32:06.090719 kubelet[2643]: I0911 23:32:06.090676 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fc9c5be1-c38d-4ddf-829e-7459b27fd795" (UID: "fc9c5be1-c38d-4ddf-829e-7459b27fd795"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 11 23:32:06.128374 containerd[1535]: time="2025-09-11T23:32:06.128332105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71\" id:\"5bc5e29a512214f899368353461d22fac72d554309ac2dab1f847fc7575509d9\" pid:3786 exit_status:1 exited_at:{seconds:1757633526 nanos:127782346}" Sep 11 23:32:06.176835 kubelet[2643]: I0911 23:32:06.176762 2643 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btlrj\" (UniqueName: \"kubernetes.io/projected/fc9c5be1-c38d-4ddf-829e-7459b27fd795-kube-api-access-btlrj\") on node \"localhost\" DevicePath \"\"" Sep 11 23:32:06.176835 kubelet[2643]: I0911 23:32:06.176802 2643 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 23:32:06.176835 kubelet[2643]: I0911 23:32:06.176811 2643 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9c5be1-c38d-4ddf-829e-7459b27fd795-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 23:32:06.227960 systemd[1]: var-lib-kubelet-pods-fc9c5be1\x2dc38d\x2d4ddf\x2d829e\x2d7459b27fd795-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtlrj.mount: Deactivated successfully. Sep 11 23:32:06.228064 systemd[1]: var-lib-kubelet-pods-fc9c5be1\x2dc38d\x2d4ddf\x2d829e\x2d7459b27fd795-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 23:32:06.943186 systemd[1]: Removed slice kubepods-besteffort-podfc9c5be1_c38d_4ddf_829e_7459b27fd795.slice - libcontainer container kubepods-besteffort-podfc9c5be1_c38d_4ddf_829e_7459b27fd795.slice. 
Sep 11 23:32:07.005384 systemd[1]: Created slice kubepods-besteffort-pod602991a4_7d47_4aee_ba40_a1040bbcd059.slice - libcontainer container kubepods-besteffort-pod602991a4_7d47_4aee_ba40_a1040bbcd059.slice. Sep 11 23:32:07.058424 containerd[1535]: time="2025-09-11T23:32:07.057456912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71\" id:\"66c2e25765ca2d81000f3c39eb26e2c48ae6b2956451150b3d3d2d33093a4eaa\" pid:3825 exit_status:1 exited_at:{seconds:1757633527 nanos:56978073}" Sep 11 23:32:07.082747 kubelet[2643]: I0911 23:32:07.082692 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2zn\" (UniqueName: \"kubernetes.io/projected/602991a4-7d47-4aee-ba40-a1040bbcd059-kube-api-access-mv2zn\") pod \"whisker-5d4cc5f966-vzr4s\" (UID: \"602991a4-7d47-4aee-ba40-a1040bbcd059\") " pod="calico-system/whisker-5d4cc5f966-vzr4s" Sep 11 23:32:07.083116 kubelet[2643]: I0911 23:32:07.082767 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/602991a4-7d47-4aee-ba40-a1040bbcd059-whisker-backend-key-pair\") pod \"whisker-5d4cc5f966-vzr4s\" (UID: \"602991a4-7d47-4aee-ba40-a1040bbcd059\") " pod="calico-system/whisker-5d4cc5f966-vzr4s" Sep 11 23:32:07.083116 kubelet[2643]: I0911 23:32:07.082795 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/602991a4-7d47-4aee-ba40-a1040bbcd059-whisker-ca-bundle\") pod \"whisker-5d4cc5f966-vzr4s\" (UID: \"602991a4-7d47-4aee-ba40-a1040bbcd059\") " pod="calico-system/whisker-5d4cc5f966-vzr4s" Sep 11 23:32:07.310506 containerd[1535]: time="2025-09-11T23:32:07.310376136Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5d4cc5f966-vzr4s,Uid:602991a4-7d47-4aee-ba40-a1040bbcd059,Namespace:calico-system,Attempt:0,}" Sep 11 23:32:07.567269 systemd-networkd[1458]: caliefbaf2b4e8d: Link UP Sep 11 23:32:07.568777 systemd-networkd[1458]: caliefbaf2b4e8d: Gained carrier Sep 11 23:32:07.583346 containerd[1535]: 2025-09-11 23:32:07.409 [INFO][3941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 23:32:07.583346 containerd[1535]: 2025-09-11 23:32:07.442 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0 whisker-5d4cc5f966- calico-system 602991a4-7d47-4aee-ba40-a1040bbcd059 878 0 2025-09-11 23:32:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d4cc5f966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d4cc5f966-vzr4s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliefbaf2b4e8d [] [] }} ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-" Sep 11 23:32:07.583346 containerd[1535]: 2025-09-11 23:32:07.442 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.583346 containerd[1535]: 2025-09-11 23:32:07.512 [INFO][3956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" HandleID="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" 
Workload="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.512 [INFO][3956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" HandleID="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Workload="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050e710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d4cc5f966-vzr4s", "timestamp":"2025-09-11 23:32:07.512452747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.512 [INFO][3956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.512 [INFO][3956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.512 [INFO][3956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.523 [INFO][3956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" host="localhost" Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.534 [INFO][3956] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.539 [INFO][3956] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.541 [INFO][3956] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.544 [INFO][3956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:07.583579 containerd[1535]: 2025-09-11 23:32:07.544 [INFO][3956] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" host="localhost" Sep 11 23:32:07.584016 containerd[1535]: 2025-09-11 23:32:07.546 [INFO][3956] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95 Sep 11 23:32:07.584016 containerd[1535]: 2025-09-11 23:32:07.550 [INFO][3956] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" host="localhost" Sep 11 23:32:07.584016 containerd[1535]: 2025-09-11 23:32:07.555 [INFO][3956] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" host="localhost" Sep 11 23:32:07.584016 containerd[1535]: 2025-09-11 23:32:07.555 [INFO][3956] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" host="localhost" Sep 11 23:32:07.584016 containerd[1535]: 2025-09-11 23:32:07.555 [INFO][3956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:32:07.584016 containerd[1535]: 2025-09-11 23:32:07.555 [INFO][3956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" HandleID="k8s-pod-network.0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Workload="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.584416 containerd[1535]: 2025-09-11 23:32:07.559 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0", GenerateName:"whisker-5d4cc5f966-", Namespace:"calico-system", SelfLink:"", UID:"602991a4-7d47-4aee-ba40-a1040bbcd059", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d4cc5f966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d4cc5f966-vzr4s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliefbaf2b4e8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:07.584416 containerd[1535]: 2025-09-11 23:32:07.559 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.584612 containerd[1535]: 2025-09-11 23:32:07.560 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefbaf2b4e8d ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.584612 containerd[1535]: 2025-09-11 23:32:07.568 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.584682 containerd[1535]: 2025-09-11 23:32:07.569 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" 
WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0", GenerateName:"whisker-5d4cc5f966-", Namespace:"calico-system", SelfLink:"", UID:"602991a4-7d47-4aee-ba40-a1040bbcd059", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d4cc5f966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95", Pod:"whisker-5d4cc5f966-vzr4s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliefbaf2b4e8d", MAC:"12:79:5d:af:1a:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:07.584788 containerd[1535]: 2025-09-11 23:32:07.580 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" Namespace="calico-system" Pod="whisker-5d4cc5f966-vzr4s" WorkloadEndpoint="localhost-k8s-whisker--5d4cc5f966--vzr4s-eth0" Sep 11 23:32:07.624944 containerd[1535]: time="2025-09-11T23:32:07.624897478Z" level=info msg="connecting to shim 
0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95" address="unix:///run/containerd/s/e3bc919da495c8c9c14b9345fff33c11dad6d6c15782571bb5e8db53689c4063" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:32:07.664375 systemd[1]: Started cri-containerd-0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95.scope - libcontainer container 0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95. Sep 11 23:32:07.676609 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:32:07.699005 containerd[1535]: time="2025-09-11T23:32:07.698942099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d4cc5f966-vzr4s,Uid:602991a4-7d47-4aee-ba40-a1040bbcd059,Namespace:calico-system,Attempt:0,} returns sandbox id \"0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95\"" Sep 11 23:32:07.701128 containerd[1535]: time="2025-09-11T23:32:07.701059456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 23:32:07.790341 kubelet[2643]: I0911 23:32:07.790283 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9c5be1-c38d-4ddf-829e-7459b27fd795" path="/var/lib/kubelet/pods/fc9c5be1-c38d-4ddf-829e-7459b27fd795/volumes" Sep 11 23:32:08.538609 containerd[1535]: time="2025-09-11T23:32:08.538551187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:08.539187 containerd[1535]: time="2025-09-11T23:32:08.539117946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 11 23:32:08.540005 containerd[1535]: time="2025-09-11T23:32:08.539973705Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:08.542106 
containerd[1535]: time="2025-09-11T23:32:08.542044063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:08.542832 containerd[1535]: time="2025-09-11T23:32:08.542669502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 841.329846ms" Sep 11 23:32:08.542832 containerd[1535]: time="2025-09-11T23:32:08.542717342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 11 23:32:08.545930 containerd[1535]: time="2025-09-11T23:32:08.545881538Z" level=info msg="CreateContainer within sandbox \"0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 23:32:08.554178 containerd[1535]: time="2025-09-11T23:32:08.554047008Z" level=info msg="Container d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:08.564546 containerd[1535]: time="2025-09-11T23:32:08.564485795Z" level=info msg="CreateContainer within sandbox \"0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858\"" Sep 11 23:32:08.565699 containerd[1535]: time="2025-09-11T23:32:08.565596233Z" level=info msg="StartContainer for \"d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858\"" Sep 11 23:32:08.566995 containerd[1535]: 
time="2025-09-11T23:32:08.566960872Z" level=info msg="connecting to shim d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858" address="unix:///run/containerd/s/e3bc919da495c8c9c14b9345fff33c11dad6d6c15782571bb5e8db53689c4063" protocol=ttrpc version=3 Sep 11 23:32:08.588599 systemd[1]: Started cri-containerd-d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858.scope - libcontainer container d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858. Sep 11 23:32:08.624572 containerd[1535]: time="2025-09-11T23:32:08.624530840Z" level=info msg="StartContainer for \"d99ddce8082ebe4acc981bd3041fade52e013f1ba5d7e0a77f797d785a20c858\" returns successfully" Sep 11 23:32:08.626337 containerd[1535]: time="2025-09-11T23:32:08.625997918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 23:32:09.368325 systemd-networkd[1458]: caliefbaf2b4e8d: Gained IPv6LL Sep 11 23:32:10.007404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510533799.mount: Deactivated successfully. 
Sep 11 23:32:10.065008 containerd[1535]: time="2025-09-11T23:32:10.064950452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:10.066271 containerd[1535]: time="2025-09-11T23:32:10.066234130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 11 23:32:10.067184 containerd[1535]: time="2025-09-11T23:32:10.067133609Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:10.069833 containerd[1535]: time="2025-09-11T23:32:10.069771206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:10.070644 containerd[1535]: time="2025-09-11T23:32:10.070364526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.444329448s" Sep 11 23:32:10.070644 containerd[1535]: time="2025-09-11T23:32:10.070403366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 11 23:32:10.073402 containerd[1535]: time="2025-09-11T23:32:10.073329202Z" level=info msg="CreateContainer within sandbox \"0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 23:32:10.081364 
containerd[1535]: time="2025-09-11T23:32:10.080731034Z" level=info msg="Container 53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:10.090048 containerd[1535]: time="2025-09-11T23:32:10.089898064Z" level=info msg="CreateContainer within sandbox \"0356607e69a622895b35fe39f369f4535a3ef7ce28a7c2b278e3e0ebc36bfc95\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98\"" Sep 11 23:32:10.090436 containerd[1535]: time="2025-09-11T23:32:10.090409984Z" level=info msg="StartContainer for \"53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98\"" Sep 11 23:32:10.093178 containerd[1535]: time="2025-09-11T23:32:10.093120661Z" level=info msg="connecting to shim 53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98" address="unix:///run/containerd/s/e3bc919da495c8c9c14b9345fff33c11dad6d6c15782571bb5e8db53689c4063" protocol=ttrpc version=3 Sep 11 23:32:10.114372 systemd[1]: Started cri-containerd-53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98.scope - libcontainer container 53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98. 
Sep 11 23:32:10.157808 containerd[1535]: time="2025-09-11T23:32:10.157725350Z" level=info msg="StartContainer for \"53ac3d9203a19a8e18eb1c73961e9e52d121c7b2df2efedc3e8d7b8fa182ef98\" returns successfully" Sep 11 23:32:10.956172 kubelet[2643]: I0911 23:32:10.956000 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d4cc5f966-vzr4s" podStartSLOduration=2.585204487 podStartE2EDuration="4.955977875s" podCreationTimestamp="2025-09-11 23:32:06 +0000 UTC" firstStartedPulling="2025-09-11 23:32:07.700615857 +0000 UTC m=+34.009866804" lastFinishedPulling="2025-09-11 23:32:10.071389245 +0000 UTC m=+36.380640192" observedRunningTime="2025-09-11 23:32:10.954938116 +0000 UTC m=+37.264189063" watchObservedRunningTime="2025-09-11 23:32:10.955977875 +0000 UTC m=+37.265228862" Sep 11 23:32:12.242439 kubelet[2643]: I0911 23:32:12.242390 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:32:12.242792 kubelet[2643]: E0911 23:32:12.242729 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:12.776521 systemd-networkd[1458]: vxlan.calico: Link UP Sep 11 23:32:12.776527 systemd-networkd[1458]: vxlan.calico: Gained carrier Sep 11 23:32:12.787577 kubelet[2643]: E0911 23:32:12.787478 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:12.789204 containerd[1535]: time="2025-09-11T23:32:12.788317760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p95x6,Uid:c0331666-75da-4a66-8211-15dd3dd0b456,Namespace:kube-system,Attempt:0,}" Sep 11 23:32:12.932843 systemd-networkd[1458]: cali73dcfe31c13: Link UP Sep 11 23:32:12.932991 systemd-networkd[1458]: cali73dcfe31c13: Gained carrier Sep 11 23:32:12.948386 
containerd[1535]: 2025-09-11 23:32:12.857 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0 coredns-7c65d6cfc9- kube-system c0331666-75da-4a66-8211-15dd3dd0b456 804 0 2025-09-11 23:31:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-p95x6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73dcfe31c13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-" Sep 11 23:32:12.948386 containerd[1535]: 2025-09-11 23:32:12.857 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.948386 containerd[1535]: 2025-09-11 23:32:12.885 [INFO][4317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" HandleID="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Workload="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.885 [INFO][4317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" HandleID="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Workload="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002c30d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-p95x6", "timestamp":"2025-09-11 23:32:12.885752386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.886 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.886 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.886 [INFO][4317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.898 [INFO][4317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" host="localhost" Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.903 [INFO][4317] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.907 [INFO][4317] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.910 [INFO][4317] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.912 [INFO][4317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:12.948593 containerd[1535]: 2025-09-11 23:32:12.912 [INFO][4317] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" 
host="localhost" Sep 11 23:32:12.948785 containerd[1535]: 2025-09-11 23:32:12.914 [INFO][4317] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e Sep 11 23:32:12.948785 containerd[1535]: 2025-09-11 23:32:12.919 [INFO][4317] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" host="localhost" Sep 11 23:32:12.948785 containerd[1535]: 2025-09-11 23:32:12.925 [INFO][4317] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" host="localhost" Sep 11 23:32:12.948785 containerd[1535]: 2025-09-11 23:32:12.925 [INFO][4317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" host="localhost" Sep 11 23:32:12.948785 containerd[1535]: 2025-09-11 23:32:12.925 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 23:32:12.948785 containerd[1535]: 2025-09-11 23:32:12.925 [INFO][4317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" HandleID="k8s-pod-network.a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Workload="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.948893 containerd[1535]: 2025-09-11 23:32:12.929 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c0331666-75da-4a66-8211-15dd3dd0b456", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-p95x6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73dcfe31c13", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:12.948956 containerd[1535]: 2025-09-11 23:32:12.929 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.948956 containerd[1535]: 2025-09-11 23:32:12.930 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73dcfe31c13 ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.948956 containerd[1535]: 2025-09-11 23:32:12.932 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.949026 containerd[1535]: 2025-09-11 23:32:12.933 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c0331666-75da-4a66-8211-15dd3dd0b456", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e", Pod:"coredns-7c65d6cfc9-p95x6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73dcfe31c13", MAC:"8e:e7:a5:93:36:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:12.949026 containerd[1535]: 2025-09-11 23:32:12.944 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p95x6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p95x6-eth0" Sep 11 23:32:12.950697 kubelet[2643]: E0911 23:32:12.950671 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:12.973736 containerd[1535]: time="2025-09-11T23:32:12.973690702Z" level=info msg="connecting to shim a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e" address="unix:///run/containerd/s/0594cb01836ec3d7243f3398b90c21c5ef3f4c6d240faef583b5d4b9fb669373" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:32:12.999367 systemd[1]: Started cri-containerd-a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e.scope - libcontainer container a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e. Sep 11 23:32:13.022080 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:32:13.048749 containerd[1535]: time="2025-09-11T23:32:13.048613592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p95x6,Uid:c0331666-75da-4a66-8211-15dd3dd0b456,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e\"" Sep 11 23:32:13.050097 kubelet[2643]: E0911 23:32:13.050025 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:13.053732 containerd[1535]: time="2025-09-11T23:32:13.053689268Z" level=info msg="CreateContainer within sandbox \"a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 23:32:13.068899 containerd[1535]: 
time="2025-09-11T23:32:13.068848454Z" level=info msg="Container 87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:13.072647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259705917.mount: Deactivated successfully. Sep 11 23:32:13.076606 containerd[1535]: time="2025-09-11T23:32:13.076561447Z" level=info msg="CreateContainer within sandbox \"a2f87952f4306da1eea57dbe3397ff457b1979df2d18ceb79cfb4c6514b43f6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d\"" Sep 11 23:32:13.077112 containerd[1535]: time="2025-09-11T23:32:13.077085367Z" level=info msg="StartContainer for \"87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d\"" Sep 11 23:32:13.078020 containerd[1535]: time="2025-09-11T23:32:13.077985646Z" level=info msg="connecting to shim 87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d" address="unix:///run/containerd/s/0594cb01836ec3d7243f3398b90c21c5ef3f4c6d240faef583b5d4b9fb669373" protocol=ttrpc version=3 Sep 11 23:32:13.103370 systemd[1]: Started cri-containerd-87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d.scope - libcontainer container 87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d. 
Sep 11 23:32:13.171206 containerd[1535]: time="2025-09-11T23:32:13.171137922Z" level=info msg="StartContainer for \"87ad6b99bbdda51536f5436934730d77c7a91b6f5906b657daec4dfa022d2b3d\" returns successfully" Sep 11 23:32:13.955283 kubelet[2643]: E0911 23:32:13.955208 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:13.967080 kubelet[2643]: I0911 23:32:13.966729 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p95x6" podStartSLOduration=32.966708924 podStartE2EDuration="32.966708924s" podCreationTimestamp="2025-09-11 23:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:32:13.965959244 +0000 UTC m=+40.275210191" watchObservedRunningTime="2025-09-11 23:32:13.966708924 +0000 UTC m=+40.275959871" Sep 11 23:32:14.168321 systemd-networkd[1458]: cali73dcfe31c13: Gained IPv6LL Sep 11 23:32:14.424349 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL Sep 11 23:32:14.788094 kubelet[2643]: E0911 23:32:14.788044 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:14.788703 containerd[1535]: time="2025-09-11T23:32:14.788528306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9dt6t,Uid:1e185599-07f5-454f-8fea-acc323edf3a2,Namespace:kube-system,Attempt:0,}" Sep 11 23:32:14.789297 containerd[1535]: time="2025-09-11T23:32:14.788850106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhns,Uid:3f9c8214-eabe-4aaf-a4d0-d65795581bdd,Namespace:calico-system,Attempt:0,}" Sep 11 23:32:14.956090 kubelet[2643]: E0911 23:32:14.956036 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:15.088169 systemd-networkd[1458]: cali19692428129: Link UP Sep 11 23:32:15.091657 systemd-networkd[1458]: cali19692428129: Gained carrier Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:14.976 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5bhns-eth0 csi-node-driver- calico-system 3f9c8214-eabe-4aaf-a4d0-d65795581bdd 703 0 2025-09-11 23:31:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5bhns eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali19692428129 [] [] }} ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:14.976 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.014 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" HandleID="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Workload="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.014 
[INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" HandleID="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Workload="localhost-k8s-csi--node--driver--5bhns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5bhns", "timestamp":"2025-09-11 23:32:15.014674155 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.014 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.014 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.015 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.044 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.050 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.057 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.059 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.062 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.062 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.064 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.070 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.077 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.077 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" host="localhost" Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.077 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:32:15.111824 containerd[1535]: 2025-09-11 23:32:15.077 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" HandleID="k8s-pod-network.d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Workload="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.112514 containerd[1535]: 2025-09-11 23:32:15.081 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bhns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f9c8214-eabe-4aaf-a4d0-d65795581bdd", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5bhns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19692428129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:15.112514 containerd[1535]: 2025-09-11 23:32:15.081 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.112514 containerd[1535]: 2025-09-11 23:32:15.081 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19692428129 ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.112514 containerd[1535]: 2025-09-11 23:32:15.094 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.112514 containerd[1535]: 2025-09-11 23:32:15.094 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" 
Namespace="calico-system" Pod="csi-node-driver-5bhns" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bhns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3f9c8214-eabe-4aaf-a4d0-d65795581bdd", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede", Pod:"csi-node-driver-5bhns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19692428129", MAC:"62:f9:3b:a6:10:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:15.112514 containerd[1535]: 2025-09-11 23:32:15.106 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" Namespace="calico-system" Pod="csi-node-driver-5bhns" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5bhns-eth0" Sep 11 23:32:15.137722 containerd[1535]: time="2025-09-11T23:32:15.137638178Z" level=info msg="connecting to shim d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede" address="unix:///run/containerd/s/91fb73fc91251aaf0e7c78b49de1ecd494c7269b804a72cba7701e39cdba17b7" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:32:15.176450 systemd[1]: Started cri-containerd-d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede.scope - libcontainer container d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede. Sep 11 23:32:15.196987 systemd-networkd[1458]: calicc132275d8e: Link UP Sep 11 23:32:15.197641 systemd-networkd[1458]: calicc132275d8e: Gained carrier Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:14.978 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0 coredns-7c65d6cfc9- kube-system 1e185599-07f5-454f-8fea-acc323edf3a2 811 0 2025-09-11 23:31:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-9dt6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc132275d8e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:14.979 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 
23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.015 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" HandleID="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Workload="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.016 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" HandleID="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Workload="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-9dt6t", "timestamp":"2025-09-11 23:32:15.015937554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.016 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.077 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.077 [INFO][4493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.147 [INFO][4493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.157 [INFO][4493] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.163 [INFO][4493] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.167 [INFO][4493] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.174 [INFO][4493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.174 [INFO][4493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.177 [INFO][4493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.182 [INFO][4493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.189 [INFO][4493] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.189 [INFO][4493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" host="localhost" Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.189 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:32:15.228501 containerd[1535]: 2025-09-11 23:32:15.189 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" HandleID="k8s-pod-network.93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Workload="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 23:32:15.229413 containerd[1535]: 2025-09-11 23:32:15.194 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e185599-07f5-454f-8fea-acc323edf3a2", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-9dt6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc132275d8e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:15.229413 containerd[1535]: 2025-09-11 23:32:15.194 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 23:32:15.229413 containerd[1535]: 2025-09-11 23:32:15.194 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc132275d8e ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 23:32:15.229413 containerd[1535]: 2025-09-11 23:32:15.197 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 23:32:15.229413 containerd[1535]: 2025-09-11 23:32:15.199 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e185599-07f5-454f-8fea-acc323edf3a2", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f", Pod:"coredns-7c65d6cfc9-9dt6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc132275d8e", MAC:"ae:7f:78:e2:7f:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:15.229413 containerd[1535]: 2025-09-11 23:32:15.217 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9dt6t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9dt6t-eth0" Sep 11 23:32:15.267445 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:32:15.277174 containerd[1535]: time="2025-09-11T23:32:15.275305428Z" level=info msg="connecting to shim 93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f" address="unix:///run/containerd/s/a3b09629fa6479ee61cc3520aa8a974f17521db787f3bf9b5f9a68c664eeaf83" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:32:15.318401 systemd[1]: Started cri-containerd-93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f.scope - libcontainer container 93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f. 
Sep 11 23:32:15.334228 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:32:15.340011 containerd[1535]: time="2025-09-11T23:32:15.339891377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bhns,Uid:3f9c8214-eabe-4aaf-a4d0-d65795581bdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede\"" Sep 11 23:32:15.343576 containerd[1535]: time="2025-09-11T23:32:15.343516094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 23:32:15.363001 containerd[1535]: time="2025-09-11T23:32:15.362960879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9dt6t,Uid:1e185599-07f5-454f-8fea-acc323edf3a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f\"" Sep 11 23:32:15.364510 kubelet[2643]: E0911 23:32:15.364446 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:15.367162 containerd[1535]: time="2025-09-11T23:32:15.367088676Z" level=info msg="CreateContainer within sandbox \"93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 23:32:15.386272 containerd[1535]: time="2025-09-11T23:32:15.385606341Z" level=info msg="Container 382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:15.391082 containerd[1535]: time="2025-09-11T23:32:15.391034937Z" level=info msg="CreateContainer within sandbox \"93fd82822debdc709a9934a419763573610c111279b7009452d92e60b6abb43f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974\"" Sep 11 
23:32:15.392033 containerd[1535]: time="2025-09-11T23:32:15.391991936Z" level=info msg="StartContainer for \"382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974\"" Sep 11 23:32:15.392033 containerd[1535]: time="2025-09-11T23:32:15.393280335Z" level=info msg="connecting to shim 382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974" address="unix:///run/containerd/s/a3b09629fa6479ee61cc3520aa8a974f17521db787f3bf9b5f9a68c664eeaf83" protocol=ttrpc version=3 Sep 11 23:32:15.417507 systemd[1]: Started cri-containerd-382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974.scope - libcontainer container 382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974. Sep 11 23:32:15.444294 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:48158.service - OpenSSH per-connection server daemon (10.0.0.1:48158). Sep 11 23:32:15.463843 containerd[1535]: time="2025-09-11T23:32:15.462515920Z" level=info msg="StartContainer for \"382a6227178a83b52ceb64d2a44bc5d43b0ce4fd6d5b9329bcf13293d8988974\" returns successfully" Sep 11 23:32:15.521986 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 48158 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:32:15.523668 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:32:15.529645 systemd-logind[1506]: New session 8 of user core. Sep 11 23:32:15.538359 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 23:32:15.740846 sshd[4655]: Connection closed by 10.0.0.1 port 48158 Sep 11 23:32:15.740680 sshd-session[4645]: pam_unix(sshd:session): session closed for user core Sep 11 23:32:15.744734 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:48158.service: Deactivated successfully. Sep 11 23:32:15.746733 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 23:32:15.747563 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. Sep 11 23:32:15.748700 systemd-logind[1506]: Removed session 8. 
Sep 11 23:32:15.788731 containerd[1535]: time="2025-09-11T23:32:15.788437701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-8fcbf,Uid:e49b49a8-da14-4dec-baca-eaf949b44951,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:32:15.788731 containerd[1535]: time="2025-09-11T23:32:15.788672061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-l7km6,Uid:4c4100f2-36a9-4013-b2ae-8574f73a72c8,Namespace:calico-apiserver,Attempt:0,}" Sep 11 23:32:15.921978 systemd-networkd[1458]: cali6789ad4a746: Link UP Sep 11 23:32:15.922480 systemd-networkd[1458]: cali6789ad4a746: Gained carrier Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.831 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0 calico-apiserver-6fd884bcf7- calico-apiserver 4c4100f2-36a9-4013-b2ae-8574f73a72c8 814 0 2025-09-11 23:31:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fd884bcf7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fd884bcf7-l7km6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6789ad4a746 [] [] }} ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.831 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.865 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" HandleID="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Workload="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.865 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" HandleID="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Workload="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fd884bcf7-l7km6", "timestamp":"2025-09-11 23:32:15.8656936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.865 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.865 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.865 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.878 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.884 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.895 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.897 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.900 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.900 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.902 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.907 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.913 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.913 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" host="localhost" Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.913 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 23:32:15.950683 containerd[1535]: 2025-09-11 23:32:15.913 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" HandleID="k8s-pod-network.d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Workload="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" Sep 11 23:32:15.952106 containerd[1535]: 2025-09-11 23:32:15.918 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0", GenerateName:"calico-apiserver-6fd884bcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c4100f2-36a9-4013-b2ae-8574f73a72c8", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fd884bcf7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fd884bcf7-l7km6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6789ad4a746", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:15.952106 containerd[1535]: 2025-09-11 23:32:15.918 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" Sep 11 23:32:15.952106 containerd[1535]: 2025-09-11 23:32:15.918 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6789ad4a746 ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" Sep 11 23:32:15.952106 containerd[1535]: 2025-09-11 23:32:15.923 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" Sep 11 23:32:15.952106 containerd[1535]: 2025-09-11 23:32:15.925 [INFO][4682] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0", GenerateName:"calico-apiserver-6fd884bcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c4100f2-36a9-4013-b2ae-8574f73a72c8", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fd884bcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c", Pod:"calico-apiserver-6fd884bcf7-l7km6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6789ad4a746", MAC:"9e:30:f2:c5:c5:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 23:32:15.952106 containerd[1535]: 2025-09-11 23:32:15.946 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-l7km6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--l7km6-eth0"
Sep 11 23:32:15.963505 kubelet[2643]: E0911 23:32:15.963431 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:32:15.963505 kubelet[2643]: E0911 23:32:15.963465 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:32:15.979467 kubelet[2643]: I0911 23:32:15.979263 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9dt6t" podStartSLOduration=34.97924031 podStartE2EDuration="34.97924031s" podCreationTimestamp="2025-09-11 23:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 23:32:15.978075751 +0000 UTC m=+42.287326698" watchObservedRunningTime="2025-09-11 23:32:15.97924031 +0000 UTC m=+42.288491417"
Sep 11 23:32:15.998272 containerd[1535]: time="2025-09-11T23:32:15.997964415Z" level=info msg="connecting to shim d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c" address="unix:///run/containerd/s/54d8b8a8364e87968b3207702201a5eba16e7cd6c31a834ea2bd4e65f1fe5083" namespace=k8s.io protocol=ttrpc version=3
Sep 11 23:32:16.026530 systemd[1]: Started cri-containerd-d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c.scope - libcontainer container d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c.
Sep 11 23:32:16.035063 systemd-networkd[1458]: calidc9a8a296b9: Link UP
Sep 11 23:32:16.035894 systemd-networkd[1458]: calidc9a8a296b9: Gained carrier
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.835 [INFO][4670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0 calico-apiserver-6fd884bcf7- calico-apiserver e49b49a8-da14-4dec-baca-eaf949b44951 812 0 2025-09-11 23:31:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fd884bcf7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fd884bcf7-8fcbf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc9a8a296b9 [] [] }} ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.836 [INFO][4670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.873 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" HandleID="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Workload="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.873 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" HandleID="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Workload="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000435190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fd884bcf7-8fcbf", "timestamp":"2025-09-11 23:32:15.873311994 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.873 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.913 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.913 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.985 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:15.995 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.003 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.006 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.010 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.010 [INFO][4710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.013 [INFO][4710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.019 [INFO][4710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.028 [INFO][4710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.028 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" host="localhost"
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.028 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 23:32:16.052477 containerd[1535]: 2025-09-11 23:32:16.028 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" HandleID="k8s-pod-network.d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Workload="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.053097 containerd[1535]: 2025-09-11 23:32:16.031 [INFO][4670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0", GenerateName:"calico-apiserver-6fd884bcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49b49a8-da14-4dec-baca-eaf949b44951", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fd884bcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fd884bcf7-8fcbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc9a8a296b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 23:32:16.053097 containerd[1535]: 2025-09-11 23:32:16.031 [INFO][4670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.053097 containerd[1535]: 2025-09-11 23:32:16.031 [INFO][4670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc9a8a296b9 ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.053097 containerd[1535]: 2025-09-11 23:32:16.036 [INFO][4670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.053097 containerd[1535]: 2025-09-11 23:32:16.036 [INFO][4670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0", GenerateName:"calico-apiserver-6fd884bcf7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49b49a8-da14-4dec-baca-eaf949b44951", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fd884bcf7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2", Pod:"calico-apiserver-6fd884bcf7-8fcbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc9a8a296b9", MAC:"02:24:05:d1:1f:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 23:32:16.053097 containerd[1535]: 2025-09-11 23:32:16.049 [INFO][4670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" Namespace="calico-apiserver" Pod="calico-apiserver-6fd884bcf7-8fcbf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fd884bcf7--8fcbf-eth0"
Sep 11 23:32:16.058488 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 23:32:16.087991 containerd[1535]: time="2025-09-11T23:32:16.087951068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-l7km6,Uid:4c4100f2-36a9-4013-b2ae-8574f73a72c8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c\""
Sep 11 23:32:16.090970 containerd[1535]: time="2025-09-11T23:32:16.090920506Z" level=info msg="connecting to shim d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2" address="unix:///run/containerd/s/790d929c012051999cf738130ea389f033e340303f4559d575e5281dc9810ab9" namespace=k8s.io protocol=ttrpc version=3
Sep 11 23:32:16.116368 systemd[1]: Started cri-containerd-d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2.scope - libcontainer container d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2.
Sep 11 23:32:16.127796 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 23:32:16.160646 containerd[1535]: time="2025-09-11T23:32:16.160532254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fd884bcf7-8fcbf,Uid:e49b49a8-da14-4dec-baca-eaf949b44951,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2\""
Sep 11 23:32:16.216264 systemd-networkd[1458]: cali19692428129: Gained IPv6LL
Sep 11 23:32:16.256139 containerd[1535]: time="2025-09-11T23:32:16.256019983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:32:16.257141 containerd[1535]: time="2025-09-11T23:32:16.257099302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 11 23:32:16.258641 containerd[1535]: time="2025-09-11T23:32:16.258604501Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:32:16.263930 containerd[1535]: time="2025-09-11T23:32:16.263889657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 23:32:16.264510 containerd[1535]: time="2025-09-11T23:32:16.264469056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 920.893162ms"
Sep 11 23:32:16.264510 containerd[1535]: time="2025-09-11T23:32:16.264506256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 11 23:32:16.265657 containerd[1535]: time="2025-09-11T23:32:16.265592376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 23:32:16.268597 containerd[1535]: time="2025-09-11T23:32:16.268554773Z" level=info msg="CreateContainer within sandbox \"d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 11 23:32:16.313938 containerd[1535]: time="2025-09-11T23:32:16.313895740Z" level=info msg="Container 7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b: CDI devices from CRI Config.CDIDevices: []"
Sep 11 23:32:16.332504 containerd[1535]: time="2025-09-11T23:32:16.332440086Z" level=info msg="CreateContainer within sandbox \"d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b\""
Sep 11 23:32:16.333075 containerd[1535]: time="2025-09-11T23:32:16.333039165Z" level=info msg="StartContainer for \"7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b\""
Sep 11 23:32:16.334905 containerd[1535]: time="2025-09-11T23:32:16.334869164Z" level=info msg="connecting to shim 7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b" address="unix:///run/containerd/s/91fb73fc91251aaf0e7c78b49de1ecd494c7269b804a72cba7701e39cdba17b7" protocol=ttrpc version=3
Sep 11 23:32:16.362368 systemd[1]: Started cri-containerd-7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b.scope - libcontainer container 7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b.
Sep 11 23:32:16.398187 containerd[1535]: time="2025-09-11T23:32:16.398136237Z" level=info msg="StartContainer for \"7cdf1841ba08e0dc625b54abca166e62681569355298af4f800e3b10c539298b\" returns successfully"
Sep 11 23:32:16.788092 containerd[1535]: time="2025-09-11T23:32:16.788040627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-pxlgq,Uid:a89bf952-f3e8-4d3f-a4f3-e05f3974ae84,Namespace:calico-system,Attempt:0,}"
Sep 11 23:32:16.788299 containerd[1535]: time="2025-09-11T23:32:16.788063507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4799c5c-5clrv,Uid:75746b2c-a7a7-4421-b06d-8194cafe093e,Namespace:calico-system,Attempt:0,}"
Sep 11 23:32:16.899301 systemd-networkd[1458]: cali0ae355958c3: Link UP
Sep 11 23:32:16.899740 systemd-networkd[1458]: cali0ae355958c3: Gained carrier
Sep 11 23:32:16.920603 systemd-networkd[1458]: calicc132275d8e: Gained IPv6LL
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.827 [INFO][4869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--pxlgq-eth0 goldmane-7988f88666- calico-system a89bf952-f3e8-4d3f-a4f3-e05f3974ae84 808 0 2025-09-11 23:31:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-pxlgq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0ae355958c3 [] [] }} ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.828 [INFO][4869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.854 [INFO][4898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" HandleID="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Workload="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.854 [INFO][4898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" HandleID="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Workload="localhost-k8s-goldmane--7988f88666--pxlgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136b20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-pxlgq", "timestamp":"2025-09-11 23:32:16.854207778 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.856 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.857 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.857 [INFO][4898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.866 [INFO][4898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.871 [INFO][4898] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.875 [INFO][4898] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.877 [INFO][4898] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.880 [INFO][4898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.880 [INFO][4898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.881 [INFO][4898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.885 [INFO][4898] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.891 [INFO][4898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.892 [INFO][4898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" host="localhost"
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.892 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 23:32:16.924377 containerd[1535]: 2025-09-11 23:32:16.892 [INFO][4898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" HandleID="k8s-pod-network.8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Workload="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.925503 containerd[1535]: 2025-09-11 23:32:16.894 [INFO][4869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--pxlgq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a89bf952-f3e8-4d3f-a4f3-e05f3974ae84", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-pxlgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0ae355958c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 23:32:16.925503 containerd[1535]: 2025-09-11 23:32:16.894 [INFO][4869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.925503 containerd[1535]: 2025-09-11 23:32:16.894 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ae355958c3 ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.925503 containerd[1535]: 2025-09-11 23:32:16.901 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.925503 containerd[1535]: 2025-09-11 23:32:16.905 [INFO][4869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--pxlgq-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a89bf952-f3e8-4d3f-a4f3-e05f3974ae84", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4", Pod:"goldmane-7988f88666-pxlgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0ae355958c3", MAC:"ce:4a:27:10:90:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 23:32:16.925503 containerd[1535]: 2025-09-11 23:32:16.921 [INFO][4869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" Namespace="calico-system" Pod="goldmane-7988f88666-pxlgq" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--pxlgq-eth0"
Sep 11 23:32:16.973270 containerd[1535]: time="2025-09-11T23:32:16.973225089Z" level=info msg="connecting to shim 8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4" address="unix:///run/containerd/s/4cb56c817a437212e381261c9fb67086451b18f3933442aa02f340a59b8e6ccc" namespace=k8s.io protocol=ttrpc version=3
Sep 11 23:32:16.980738 kubelet[2643]: E0911 23:32:16.980691 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:32:17.011408 systemd[1]: Started cri-containerd-8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4.scope - libcontainer container 8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4.
Sep 11 23:32:17.038111 systemd-networkd[1458]: cali07c310c0040: Link UP
Sep 11 23:32:17.041425 systemd-networkd[1458]: cali07c310c0040: Gained carrier
Sep 11 23:32:17.046516 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.829 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0 calico-kube-controllers-b4799c5c- calico-system 75746b2c-a7a7-4421-b06d-8194cafe093e 813 0 2025-09-11 23:31:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b4799c5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b4799c5c-5clrv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali07c310c0040 [] [] }} ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.829 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.857 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" HandleID="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Workload="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.857 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" HandleID="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Workload="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b4799c5c-5clrv", "timestamp":"2025-09-11 23:32:16.857365695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.857 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.892 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.892 [INFO][4900] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.968 [INFO][4900] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.983 [INFO][4900] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.996 [INFO][4900] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:16.999 [INFO][4900] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.004 [INFO][4900] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.004 [INFO][4900] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.007 [INFO][4900] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.014 [INFO][4900] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.025 [INFO][4900] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.025 [INFO][4900] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" host="localhost"
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.025 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 11 23:32:17.061368 containerd[1535]: 2025-09-11 23:32:17.025 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" HandleID="k8s-pod-network.5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Workload="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0"
Sep 11 23:32:17.061885 containerd[1535]: 2025-09-11 23:32:17.032 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0", GenerateName:"calico-kube-controllers-b4799c5c-", Namespace:"calico-system", SelfLink:"", UID:"75746b2c-a7a7-4421-b06d-8194cafe093e", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4799c5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b4799c5c-5clrv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali07c310c0040", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 23:32:17.061885 containerd[1535]: 2025-09-11 23:32:17.032 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0"
Sep 11 23:32:17.061885 containerd[1535]: 2025-09-11 23:32:17.032 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07c310c0040 ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0"
Sep 11 23:32:17.061885 containerd[1535]: 2025-09-11 23:32:17.039 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0"
Sep 11 23:32:17.061885 containerd[1535]: 2025-09-11 23:32:17.042 [INFO][4874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0", GenerateName:"calico-kube-controllers-b4799c5c-", Namespace:"calico-system", SelfLink:"", UID:"75746b2c-a7a7-4421-b06d-8194cafe093e", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 23, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b4799c5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413", Pod:"calico-kube-controllers-b4799c5c-5clrv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali07c310c0040", MAC:"d2:61:69:e7:9f:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 11 23:32:17.061885 containerd[1535]: 2025-09-11
23:32:17.056 [INFO][4874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" Namespace="calico-system" Pod="calico-kube-controllers-b4799c5c-5clrv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b4799c5c--5clrv-eth0" Sep 11 23:32:17.091005 containerd[1535]: time="2025-09-11T23:32:17.090754526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-pxlgq,Uid:a89bf952-f3e8-4d3f-a4f3-e05f3974ae84,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4\"" Sep 11 23:32:17.096419 containerd[1535]: time="2025-09-11T23:32:17.096358322Z" level=info msg="connecting to shim 5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413" address="unix:///run/containerd/s/7f92e5d55f7483ea0669e0d44b4bd7b3b9aacc9c0010436de654eee64c773bed" namespace=k8s.io protocol=ttrpc version=3 Sep 11 23:32:17.122378 systemd[1]: Started cri-containerd-5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413.scope - libcontainer container 5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413. 
Sep 11 23:32:17.160423 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 23:32:17.177449 systemd-networkd[1458]: calidc9a8a296b9: Gained IPv6LL Sep 11 23:32:17.244168 containerd[1535]: time="2025-09-11T23:32:17.243796859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b4799c5c-5clrv,Uid:75746b2c-a7a7-4421-b06d-8194cafe093e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413\"" Sep 11 23:32:17.432399 systemd-networkd[1458]: cali6789ad4a746: Gained IPv6LL Sep 11 23:32:17.743921 containerd[1535]: time="2025-09-11T23:32:17.743875150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:17.744661 containerd[1535]: time="2025-09-11T23:32:17.744440630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 11 23:32:17.745559 containerd[1535]: time="2025-09-11T23:32:17.745520429Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:17.747574 containerd[1535]: time="2025-09-11T23:32:17.747529468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:17.748269 containerd[1535]: time="2025-09-11T23:32:17.748234547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.482572171s" Sep 11 23:32:17.748269 containerd[1535]: time="2025-09-11T23:32:17.748269187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 23:32:17.749910 containerd[1535]: time="2025-09-11T23:32:17.749540946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 23:32:17.750041 containerd[1535]: time="2025-09-11T23:32:17.750012506Z" level=info msg="CreateContainer within sandbox \"d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 23:32:17.758190 containerd[1535]: time="2025-09-11T23:32:17.757507221Z" level=info msg="Container 2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:17.765095 containerd[1535]: time="2025-09-11T23:32:17.765028896Z" level=info msg="CreateContainer within sandbox \"d11c229511f2c6bcb1676cb8e80d8b9c729fb700d3186419d083a0e84fc4d24c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5\"" Sep 11 23:32:17.766899 containerd[1535]: time="2025-09-11T23:32:17.765517695Z" level=info msg="StartContainer for \"2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5\"" Sep 11 23:32:17.766899 containerd[1535]: time="2025-09-11T23:32:17.766683374Z" level=info msg="connecting to shim 2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5" address="unix:///run/containerd/s/54d8b8a8364e87968b3207702201a5eba16e7cd6c31a834ea2bd4e65f1fe5083" protocol=ttrpc version=3 Sep 11 23:32:17.792366 systemd[1]: Started cri-containerd-2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5.scope - 
libcontainer container 2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5. Sep 11 23:32:17.833616 containerd[1535]: time="2025-09-11T23:32:17.833578648Z" level=info msg="StartContainer for \"2733b0c2dd179b38a88335e234ebaed5b016d921d0aea116790aba92d432d4a5\" returns successfully" Sep 11 23:32:17.988557 kubelet[2643]: E0911 23:32:17.988520 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:18.013624 kubelet[2643]: I0911 23:32:18.012882 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fd884bcf7-l7km6" podStartSLOduration=27.353566403 podStartE2EDuration="29.012860843s" podCreationTimestamp="2025-09-11 23:31:49 +0000 UTC" firstStartedPulling="2025-09-11 23:32:16.089668067 +0000 UTC m=+42.398918974" lastFinishedPulling="2025-09-11 23:32:17.748962467 +0000 UTC m=+44.058213414" observedRunningTime="2025-09-11 23:32:18.009887525 +0000 UTC m=+44.319138432" watchObservedRunningTime="2025-09-11 23:32:18.012860843 +0000 UTC m=+44.322111790" Sep 11 23:32:18.045143 containerd[1535]: time="2025-09-11T23:32:18.045081422Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:18.046016 containerd[1535]: time="2025-09-11T23:32:18.045869462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 23:32:18.048634 containerd[1535]: time="2025-09-11T23:32:18.048589020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size 
\"45900064\" in 298.968114ms" Sep 11 23:32:18.048906 containerd[1535]: time="2025-09-11T23:32:18.048755860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 23:32:18.049995 containerd[1535]: time="2025-09-11T23:32:18.049842339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 23:32:18.053215 containerd[1535]: time="2025-09-11T23:32:18.053068377Z" level=info msg="CreateContainer within sandbox \"d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 23:32:18.069127 containerd[1535]: time="2025-09-11T23:32:18.065202529Z" level=info msg="Container 619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:18.072807 systemd-networkd[1458]: cali07c310c0040: Gained IPv6LL Sep 11 23:32:18.079504 containerd[1535]: time="2025-09-11T23:32:18.079450200Z" level=info msg="CreateContainer within sandbox \"d108e181648fd8753e3324cc51a471afd1f084c3527b46d44b82ecb9522471a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910\"" Sep 11 23:32:18.080217 containerd[1535]: time="2025-09-11T23:32:18.080178439Z" level=info msg="StartContainer for \"619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910\"" Sep 11 23:32:18.081425 containerd[1535]: time="2025-09-11T23:32:18.081396158Z" level=info msg="connecting to shim 619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910" address="unix:///run/containerd/s/790d929c012051999cf738130ea389f033e340303f4559d575e5281dc9810ab9" protocol=ttrpc version=3 Sep 11 23:32:18.110414 systemd[1]: Started cri-containerd-619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910.scope - libcontainer container 
619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910. Sep 11 23:32:18.152163 containerd[1535]: time="2025-09-11T23:32:18.151381833Z" level=info msg="StartContainer for \"619e0a9ea6bb4ae04100ae16fa59c691f9f060b31a060d646f55be0e2d5c0910\" returns successfully" Sep 11 23:32:18.584323 systemd-networkd[1458]: cali0ae355958c3: Gained IPv6LL Sep 11 23:32:18.993726 kubelet[2643]: I0911 23:32:18.993687 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:32:18.996380 kubelet[2643]: E0911 23:32:18.996349 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 23:32:19.025529 kubelet[2643]: I0911 23:32:19.025455 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fd884bcf7-8fcbf" podStartSLOduration=28.137694296 podStartE2EDuration="30.025433982s" podCreationTimestamp="2025-09-11 23:31:49 +0000 UTC" firstStartedPulling="2025-09-11 23:32:16.161842133 +0000 UTC m=+42.471093080" lastFinishedPulling="2025-09-11 23:32:18.049581819 +0000 UTC m=+44.358832766" observedRunningTime="2025-09-11 23:32:19.019935786 +0000 UTC m=+45.329186733" watchObservedRunningTime="2025-09-11 23:32:19.025433982 +0000 UTC m=+45.334684929" Sep 11 23:32:19.302576 containerd[1535]: time="2025-09-11T23:32:19.302448492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:19.303727 containerd[1535]: time="2025-09-11T23:32:19.303686412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 11 23:32:19.304781 containerd[1535]: time="2025-09-11T23:32:19.304748971Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:19.307093 containerd[1535]: time="2025-09-11T23:32:19.307055970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:19.309309 containerd[1535]: time="2025-09-11T23:32:19.309270808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.259091349s" Sep 11 23:32:19.309351 containerd[1535]: time="2025-09-11T23:32:19.309314528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 11 23:32:19.310385 containerd[1535]: time="2025-09-11T23:32:19.310357767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 23:32:19.315780 containerd[1535]: time="2025-09-11T23:32:19.315654804Z" level=info msg="CreateContainer within sandbox \"d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 23:32:19.325829 containerd[1535]: time="2025-09-11T23:32:19.325784438Z" level=info msg="Container a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:19.339815 containerd[1535]: time="2025-09-11T23:32:19.339765709Z" level=info msg="CreateContainer within sandbox \"d7a62b2f5386633afa532b50f61c044b70ccdaa04dc48c93d111b1865a1e5ede\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b\"" Sep 11 23:32:19.340842 containerd[1535]: time="2025-09-11T23:32:19.340781829Z" level=info msg="StartContainer for \"a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b\"" Sep 11 23:32:19.342486 containerd[1535]: time="2025-09-11T23:32:19.342456988Z" level=info msg="connecting to shim a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b" address="unix:///run/containerd/s/91fb73fc91251aaf0e7c78b49de1ecd494c7269b804a72cba7701e39cdba17b7" protocol=ttrpc version=3 Sep 11 23:32:19.383350 systemd[1]: Started cri-containerd-a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b.scope - libcontainer container a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b. Sep 11 23:32:19.424033 containerd[1535]: time="2025-09-11T23:32:19.423988658Z" level=info msg="StartContainer for \"a8161cf82b00cae754325f96e5d2578cac0e2b92001d519603abf627af81344b\" returns successfully" Sep 11 23:32:19.863632 kubelet[2643]: I0911 23:32:19.863582 2643 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 23:32:19.872641 kubelet[2643]: I0911 23:32:19.872569 2643 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 23:32:20.029922 kubelet[2643]: I0911 23:32:20.029797 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5bhns" podStartSLOduration=23.061187736 podStartE2EDuration="27.029777328s" podCreationTimestamp="2025-09-11 23:31:53 +0000 UTC" firstStartedPulling="2025-09-11 23:32:15.341534976 +0000 UTC m=+41.650785923" lastFinishedPulling="2025-09-11 23:32:19.310124608 +0000 UTC m=+45.619375515" observedRunningTime="2025-09-11 
23:32:20.027261689 +0000 UTC m=+46.336512636" watchObservedRunningTime="2025-09-11 23:32:20.029777328 +0000 UTC m=+46.339028275" Sep 11 23:32:20.757987 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:52560.service - OpenSSH per-connection server daemon (10.0.0.1:52560). Sep 11 23:32:20.848247 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 52560 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:32:20.851629 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:32:20.858726 systemd-logind[1506]: New session 9 of user core. Sep 11 23:32:20.867493 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 23:32:21.026321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976446506.mount: Deactivated successfully. Sep 11 23:32:21.200564 sshd[5169]: Connection closed by 10.0.0.1 port 52560 Sep 11 23:32:21.200918 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Sep 11 23:32:21.206876 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:52560.service: Deactivated successfully. Sep 11 23:32:21.212585 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 23:32:21.214639 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. Sep 11 23:32:21.216921 systemd-logind[1506]: Removed session 9. 
Sep 11 23:32:21.477114 containerd[1535]: time="2025-09-11T23:32:21.476971593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:21.477795 containerd[1535]: time="2025-09-11T23:32:21.477758433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 11 23:32:21.479239 containerd[1535]: time="2025-09-11T23:32:21.479187352Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:21.482501 containerd[1535]: time="2025-09-11T23:32:21.482431670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:21.484128 containerd[1535]: time="2025-09-11T23:32:21.484041149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.173641142s" Sep 11 23:32:21.484128 containerd[1535]: time="2025-09-11T23:32:21.484079189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 11 23:32:21.486999 containerd[1535]: time="2025-09-11T23:32:21.486910748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 23:32:21.488646 containerd[1535]: time="2025-09-11T23:32:21.488587507Z" level=info msg="CreateContainer within sandbox 
\"8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 23:32:21.501345 containerd[1535]: time="2025-09-11T23:32:21.501292140Z" level=info msg="Container 106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:21.511107 containerd[1535]: time="2025-09-11T23:32:21.511028215Z" level=info msg="CreateContainer within sandbox \"8c2671245ac44a5d2a51bd5aaa2076bd02c10c928465db5a0cf01797de5aaac4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d\"" Sep 11 23:32:21.512334 containerd[1535]: time="2025-09-11T23:32:21.511746014Z" level=info msg="StartContainer for \"106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d\"" Sep 11 23:32:21.513478 containerd[1535]: time="2025-09-11T23:32:21.513450853Z" level=info msg="connecting to shim 106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d" address="unix:///run/containerd/s/4cb56c817a437212e381261c9fb67086451b18f3933442aa02f340a59b8e6ccc" protocol=ttrpc version=3 Sep 11 23:32:21.541366 systemd[1]: Started cri-containerd-106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d.scope - libcontainer container 106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d. 
Sep 11 23:32:21.593118 containerd[1535]: time="2025-09-11T23:32:21.593027730Z" level=info msg="StartContainer for \"106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d\" returns successfully" Sep 11 23:32:22.024788 kubelet[2643]: I0911 23:32:22.024325 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-pxlgq" podStartSLOduration=25.631216155 podStartE2EDuration="30.024305939s" podCreationTimestamp="2025-09-11 23:31:52 +0000 UTC" firstStartedPulling="2025-09-11 23:32:17.092749004 +0000 UTC m=+43.401999951" lastFinishedPulling="2025-09-11 23:32:21.485838788 +0000 UTC m=+47.795089735" observedRunningTime="2025-09-11 23:32:22.023361419 +0000 UTC m=+48.332612366" watchObservedRunningTime="2025-09-11 23:32:22.024305939 +0000 UTC m=+48.333556886" Sep 11 23:32:23.014142 kubelet[2643]: I0911 23:32:23.014111 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 23:32:23.635609 containerd[1535]: time="2025-09-11T23:32:23.635465425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d\" id:\"094619e3223feebf2470ad92bed513cb9161e76cbc9e341e4e9933b3f3482993\" pid:5250 exit_status:1 exited_at:{seconds:1757633543 nanos:628795988}" Sep 11 23:32:23.709876 containerd[1535]: time="2025-09-11T23:32:23.709777990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d\" id:\"a481d23614f9dc3ac6f9b0257d96b66d3fce71925eca7238b387ddf0d1a5f50a\" pid:5274 exit_status:1 exited_at:{seconds:1757633543 nanos:709415590}" Sep 11 23:32:24.092013 containerd[1535]: time="2025-09-11T23:32:24.091974932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"106bc179c318193a5449b2b7a82daa4feafbd2f0d456e6e3b106f83b3739c97d\" id:\"a886bb0271d010dd668c3dc87d2bdcde01754787816dff57e16bd63789f5a45d\" pid:5303 exit_status:1 
exited_at:{seconds:1757633544 nanos:91684652}" Sep 11 23:32:24.978976 containerd[1535]: time="2025-09-11T23:32:24.978921378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:24.979535 containerd[1535]: time="2025-09-11T23:32:24.979499818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 11 23:32:24.981204 containerd[1535]: time="2025-09-11T23:32:24.981171457Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:24.984060 containerd[1535]: time="2025-09-11T23:32:24.983995456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 23:32:24.984776 containerd[1535]: time="2025-09-11T23:32:24.984593935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.497645227s" Sep 11 23:32:24.984776 containerd[1535]: time="2025-09-11T23:32:24.984630015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 11 23:32:25.000238 containerd[1535]: time="2025-09-11T23:32:25.000193528Z" level=info msg="CreateContainer within sandbox \"5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 23:32:25.009687 containerd[1535]: time="2025-09-11T23:32:25.009435285Z" level=info msg="Container 0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e: CDI devices from CRI Config.CDIDevices: []" Sep 11 23:32:25.017049 containerd[1535]: time="2025-09-11T23:32:25.017007441Z" level=info msg="CreateContainer within sandbox \"5ae75c9470534980f71a862576181992cdb0e19ee9ffd90488b07de0da446413\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e\"" Sep 11 23:32:25.019170 containerd[1535]: time="2025-09-11T23:32:25.017801881Z" level=info msg="StartContainer for \"0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e\"" Sep 11 23:32:25.019170 containerd[1535]: time="2025-09-11T23:32:25.018929401Z" level=info msg="connecting to shim 0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e" address="unix:///run/containerd/s/7f92e5d55f7483ea0669e0d44b4bd7b3b9aacc9c0010436de654eee64c773bed" protocol=ttrpc version=3 Sep 11 23:32:25.044408 systemd[1]: Started cri-containerd-0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e.scope - libcontainer container 0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e. 
Sep 11 23:32:25.089482 containerd[1535]: time="2025-09-11T23:32:25.089438611Z" level=info msg="StartContainer for \"0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e\" returns successfully" Sep 11 23:32:26.047292 kubelet[2643]: I0911 23:32:26.046995 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b4799c5c-5clrv" podStartSLOduration=25.306698377 podStartE2EDuration="33.046684934s" podCreationTimestamp="2025-09-11 23:31:53 +0000 UTC" firstStartedPulling="2025-09-11 23:32:17.245315698 +0000 UTC m=+43.554566645" lastFinishedPulling="2025-09-11 23:32:24.985302295 +0000 UTC m=+51.294553202" observedRunningTime="2025-09-11 23:32:26.046277654 +0000 UTC m=+52.355528601" watchObservedRunningTime="2025-09-11 23:32:26.046684934 +0000 UTC m=+52.355935881" Sep 11 23:32:26.065775 containerd[1535]: time="2025-09-11T23:32:26.065727087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e\" id:\"8e7903ca8ff37ba8fcfd36ec89ea924f5f31b70c4798c299bbad47325ec67fb2\" pid:5376 exited_at:{seconds:1757633546 nanos:64879207}" Sep 11 23:32:26.215126 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:52576.service - OpenSSH per-connection server daemon (10.0.0.1:52576). Sep 11 23:32:26.269921 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 52576 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:32:26.271669 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:32:26.276231 systemd-logind[1506]: New session 10 of user core. Sep 11 23:32:26.289472 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 11 23:32:26.593414 sshd[5391]: Connection closed by 10.0.0.1 port 52576 Sep 11 23:32:26.593917 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Sep 11 23:32:26.605957 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:52576.service: Deactivated successfully. Sep 11 23:32:26.608073 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 23:32:26.611202 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. Sep 11 23:32:26.614698 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:52582.service - OpenSSH per-connection server daemon (10.0.0.1:52582). Sep 11 23:32:26.616001 systemd-logind[1506]: Removed session 10. Sep 11 23:32:26.662384 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 52582 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M Sep 11 23:32:26.664009 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 23:32:26.669165 systemd-logind[1506]: New session 11 of user core. Sep 11 23:32:26.677387 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 23:32:26.858365 sshd[5412]: Connection closed by 10.0.0.1 port 52582 Sep 11 23:32:26.859165 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Sep 11 23:32:26.869401 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:52582.service: Deactivated successfully. Sep 11 23:32:26.872173 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 23:32:26.875115 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. Sep 11 23:32:26.883373 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:52598.service - OpenSSH per-connection server daemon (10.0.0.1:52598). Sep 11 23:32:26.884979 systemd-logind[1506]: Removed session 11. 
Sep 11 23:32:26.936431 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 52598 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:26.937856 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:26.942552 systemd-logind[1506]: New session 12 of user core.
Sep 11 23:32:26.948312 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 23:32:27.105056 sshd[5429]: Connection closed by 10.0.0.1 port 52598
Sep 11 23:32:27.105409 sshd-session[5425]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:27.108687 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:52598.service: Deactivated successfully.
Sep 11 23:32:27.111658 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 23:32:27.112825 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit.
Sep 11 23:32:27.115746 systemd-logind[1506]: Removed session 12.
Sep 11 23:32:29.683332 containerd[1535]: time="2025-09-11T23:32:29.683279554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41baa7f7c36fb5f7de3a70285ddb4d88ab7c8bda0b63a9e1ed920f0a0722fb71\" id:\"b64e043104a0c7a135c04d1c99f2b71bc58dc148dad702c63a43f5d6d756978b\" pid:5455 exit_status:1 exited_at:{seconds:1757633549 nanos:682802794}"
Sep 11 23:32:32.122614 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:43690.service - OpenSSH per-connection server daemon (10.0.0.1:43690).
Sep 11 23:32:32.189054 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:32.190688 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:32.194544 systemd-logind[1506]: New session 13 of user core.
Sep 11 23:32:32.201335 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 23:32:32.345232 sshd[5470]: Connection closed by 10.0.0.1 port 43690
Sep 11 23:32:32.345599 sshd-session[5468]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:32.357902 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:43690.service: Deactivated successfully.
Sep 11 23:32:32.360853 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 23:32:32.361673 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit.
Sep 11 23:32:32.363848 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:43702.service - OpenSSH per-connection server daemon (10.0.0.1:43702).
Sep 11 23:32:32.364761 systemd-logind[1506]: Removed session 13.
Sep 11 23:32:32.424446 sshd[5483]: Accepted publickey for core from 10.0.0.1 port 43702 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:32.425895 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:32.432287 systemd-logind[1506]: New session 14 of user core.
Sep 11 23:32:32.445384 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 23:32:32.673212 sshd[5485]: Connection closed by 10.0.0.1 port 43702
Sep 11 23:32:32.674615 sshd-session[5483]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:32.682896 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:43702.service: Deactivated successfully.
Sep 11 23:32:32.685715 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 23:32:32.686602 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit.
Sep 11 23:32:32.689271 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:43710.service - OpenSSH per-connection server daemon (10.0.0.1:43710).
Sep 11 23:32:32.689792 systemd-logind[1506]: Removed session 14.
Sep 11 23:32:32.750216 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 43710 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:32.751917 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:32.756181 systemd-logind[1506]: New session 15 of user core.
Sep 11 23:32:32.766334 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 23:32:34.408269 sshd[5498]: Connection closed by 10.0.0.1 port 43710
Sep 11 23:32:34.408682 sshd-session[5496]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:34.425455 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:43710.service: Deactivated successfully.
Sep 11 23:32:34.428464 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 23:32:34.428670 systemd[1]: session-15.scope: Consumed 545ms CPU time, 72.2M memory peak.
Sep 11 23:32:34.429292 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit.
Sep 11 23:32:34.433294 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:43722.service - OpenSSH per-connection server daemon (10.0.0.1:43722).
Sep 11 23:32:34.435176 systemd-logind[1506]: Removed session 15.
Sep 11 23:32:34.490847 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 43722 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:34.492251 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:34.496710 systemd-logind[1506]: New session 16 of user core.
Sep 11 23:32:34.507319 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 23:32:34.900767 sshd[5526]: Connection closed by 10.0.0.1 port 43722
Sep 11 23:32:34.901349 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:34.914406 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:43722.service: Deactivated successfully.
Sep 11 23:32:34.916536 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 23:32:34.917532 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit.
Sep 11 23:32:34.920690 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:43724.service - OpenSSH per-connection server daemon (10.0.0.1:43724).
Sep 11 23:32:34.921967 systemd-logind[1506]: Removed session 16.
Sep 11 23:32:34.973294 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 43724 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:34.974916 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:34.979215 systemd-logind[1506]: New session 17 of user core.
Sep 11 23:32:34.988334 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 23:32:35.131668 sshd[5539]: Connection closed by 10.0.0.1 port 43724
Sep 11 23:32:35.132047 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:35.135673 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit.
Sep 11 23:32:35.135903 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:43724.service: Deactivated successfully.
Sep 11 23:32:35.138830 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 23:32:35.140520 systemd-logind[1506]: Removed session 17.
Sep 11 23:32:35.923101 containerd[1535]: time="2025-09-11T23:32:35.923048840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f4c6cf972b08c0827a137686e9aa5b9af50b0d54a63e6d0477c40cc4d78ba4e\" id:\"f01f8ec0f22d50ffd5dc75a0f21922904f7914944ace53976f664e526fa44adf\" pid:5563 exited_at:{seconds:1757633555 nanos:922795520}"
Sep 11 23:32:38.786285 kubelet[2643]: I0911 23:32:38.786239 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 23:32:40.144108 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:50930.service - OpenSSH per-connection server daemon (10.0.0.1:50930).
Sep 11 23:32:40.212450 sshd[5582]: Accepted publickey for core from 10.0.0.1 port 50930 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:40.214415 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:40.219111 systemd-logind[1506]: New session 18 of user core.
Sep 11 23:32:40.229338 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 23:32:40.364074 sshd[5584]: Connection closed by 10.0.0.1 port 50930
Sep 11 23:32:40.364436 sshd-session[5582]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:40.367903 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:50930.service: Deactivated successfully.
Sep 11 23:32:40.369781 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 23:32:40.371323 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit.
Sep 11 23:32:40.373207 systemd-logind[1506]: Removed session 18.
Sep 11 23:32:45.377249 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:50938.service - OpenSSH per-connection server daemon (10.0.0.1:50938).
Sep 11 23:32:45.418219 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 50938 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:45.419564 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:45.423795 systemd-logind[1506]: New session 19 of user core.
Sep 11 23:32:45.431377 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 23:32:45.550908 sshd[5603]: Connection closed by 10.0.0.1 port 50938
Sep 11 23:32:45.551267 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:45.555229 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:50938.service: Deactivated successfully.
Sep 11 23:32:45.557646 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 23:32:45.558466 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit.
Sep 11 23:32:45.559531 systemd-logind[1506]: Removed session 19.
Sep 11 23:32:46.787945 kubelet[2643]: E0911 23:32:46.787783 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:32:47.799287 kubelet[2643]: E0911 23:32:47.799258 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 23:32:50.566832 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:41314.service - OpenSSH per-connection server daemon (10.0.0.1:41314).
Sep 11 23:32:50.632516 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:80++t0IeckZLj/QdvkSdz/mmpT+BBkM5LAEfe6Fhv0M
Sep 11 23:32:50.634036 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 23:32:50.638326 systemd-logind[1506]: New session 20 of user core.
Sep 11 23:32:50.648344 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 23:32:50.791666 sshd[5619]: Connection closed by 10.0.0.1 port 41314
Sep 11 23:32:50.792384 sshd-session[5617]: pam_unix(sshd:session): session closed for user core
Sep 11 23:32:50.795811 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:41314.service: Deactivated successfully.
Sep 11 23:32:50.797958 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 23:32:50.799835 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit.
Sep 11 23:32:50.800993 systemd-logind[1506]: Removed session 20.