May 12 12:53:55.807372 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 12 12:53:55.807393 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon May 12 11:28:01 -00 2025
May 12 12:53:55.807402 kernel: KASLR enabled
May 12 12:53:55.807407 kernel: efi: EFI v2.7 by EDK II
May 12 12:53:55.807413 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 12 12:53:55.807418 kernel: random: crng init done
May 12 12:53:55.807425 kernel: secureboot: Secure boot disabled
May 12 12:53:55.807430 kernel: ACPI: Early table checksum verification disabled
May 12 12:53:55.807436 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 12 12:53:55.807443 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 12 12:53:55.807448 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807454 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807459 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807465 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807472 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807479 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807485 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807491 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807497 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 12 12:53:55.807503 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 12 12:53:55.807509 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 12 12:53:55.807515 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 12 12:53:55.807521 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 12 12:53:55.807527 kernel: Zone ranges:
May 12 12:53:55.807533 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 12 12:53:55.807540 kernel: DMA32 empty
May 12 12:53:55.807546 kernel: Normal empty
May 12 12:53:55.807552 kernel: Device empty
May 12 12:53:55.807558 kernel: Movable zone start for each node
May 12 12:53:55.807563 kernel: Early memory node ranges
May 12 12:53:55.807569 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 12 12:53:55.807575 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 12 12:53:55.807581 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 12 12:53:55.807587 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 12 12:53:55.807593 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 12 12:53:55.807599 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 12 12:53:55.807605 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 12 12:53:55.807612 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 12 12:53:55.807618 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 12 12:53:55.807624 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 12 12:53:55.807633 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 12 12:53:55.807639 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 12 12:53:55.807645 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 12 12:53:55.807653 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 12 12:53:55.807659 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 12 12:53:55.807666 kernel: psci: probing for conduit method from ACPI.
May 12 12:53:55.807672 kernel: psci: PSCIv1.1 detected in firmware.
May 12 12:53:55.807678 kernel: psci: Using standard PSCI v0.2 function IDs
May 12 12:53:55.807684 kernel: psci: Trusted OS migration not required
May 12 12:53:55.807690 kernel: psci: SMC Calling Convention v1.1
May 12 12:53:55.807697 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 12 12:53:55.807703 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 12 12:53:55.807710 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 12 12:53:55.807718 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 12 12:53:55.807724 kernel: Detected PIPT I-cache on CPU0
May 12 12:53:55.807730 kernel: CPU features: detected: GIC system register CPU interface
May 12 12:53:55.807736 kernel: CPU features: detected: Spectre-v4
May 12 12:53:55.807743 kernel: CPU features: detected: Spectre-BHB
May 12 12:53:55.807749 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 12 12:53:55.807756 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 12 12:53:55.807762 kernel: CPU features: detected: ARM erratum 1418040
May 12 12:53:55.807768 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 12 12:53:55.807775 kernel: alternatives: applying boot alternatives
May 12 12:53:55.807782 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db1cab4675737b11381d09c3bd697a21f5e572397084a94e6025aaadcb33c7b2
May 12 12:53:55.807790 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 12:53:55.807797 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 12 12:53:55.807804 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 12:53:55.807810 kernel: Fallback order for Node 0: 0
May 12 12:53:55.807816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 12 12:53:55.807822 kernel: Policy zone: DMA
May 12 12:53:55.807829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 12:53:55.807885 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 12 12:53:55.807893 kernel: software IO TLB: area num 4.
May 12 12:53:55.807899 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 12 12:53:55.807906 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 12 12:53:55.807912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 12 12:53:55.807921 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 12:53:55.807928 kernel: rcu: RCU event tracing is enabled.
May 12 12:53:55.807935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 12 12:53:55.807941 kernel: Trampoline variant of Tasks RCU enabled.
May 12 12:53:55.807948 kernel: Tracing variant of Tasks RCU enabled.
May 12 12:53:55.807954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 12:53:55.807961 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 12 12:53:55.807967 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 12:53:55.807974 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 12:53:55.807980 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 12 12:53:55.807986 kernel: GICv3: 256 SPIs implemented
May 12 12:53:55.807994 kernel: GICv3: 0 Extended SPIs implemented
May 12 12:53:55.808000 kernel: Root IRQ handler: gic_handle_irq
May 12 12:53:55.808006 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 12 12:53:55.808013 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 12 12:53:55.808019 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 12 12:53:55.808025 kernel: ITS [mem 0x08080000-0x0809ffff]
May 12 12:53:55.808032 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 12 12:53:55.808038 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 12 12:53:55.808044 kernel: GICv3: using LPI property table @0x0000000040100000
May 12 12:53:55.808051 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 12 12:53:55.808057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 12 12:53:55.808064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 12:53:55.808071 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 12 12:53:55.808078 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 12 12:53:55.808084 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 12 12:53:55.808091 kernel: arm-pv: using stolen time PV
May 12 12:53:55.808098 kernel: Console: colour dummy device 80x25
May 12 12:53:55.808104 kernel: ACPI: Core revision 20240827
May 12 12:53:55.808111 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 12 12:53:55.808118 kernel: pid_max: default: 32768 minimum: 301
May 12 12:53:55.808124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 12 12:53:55.808132 kernel: landlock: Up and running.
May 12 12:53:55.808139 kernel: SELinux: Initializing.
May 12 12:53:55.808145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 12:53:55.808152 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 12:53:55.808158 kernel: rcu: Hierarchical SRCU implementation.
May 12 12:53:55.808165 kernel: rcu: Max phase no-delay instances is 400.
May 12 12:53:55.808172 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 12 12:53:55.808178 kernel: Remapping and enabling EFI services.
May 12 12:53:55.808185 kernel: smp: Bringing up secondary CPUs ...
May 12 12:53:55.808191 kernel: Detected PIPT I-cache on CPU1
May 12 12:53:55.808204 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 12 12:53:55.808211 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 12 12:53:55.808219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 12:53:55.808225 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 12 12:53:55.808232 kernel: Detected PIPT I-cache on CPU2
May 12 12:53:55.808239 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 12 12:53:55.808246 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 12 12:53:55.808254 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 12:53:55.808261 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 12 12:53:55.808268 kernel: Detected PIPT I-cache on CPU3
May 12 12:53:55.808275 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 12 12:53:55.808282 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 12 12:53:55.808289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 12:53:55.808295 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 12 12:53:55.808302 kernel: smp: Brought up 1 node, 4 CPUs
May 12 12:53:55.808309 kernel: SMP: Total of 4 processors activated.
May 12 12:53:55.808316 kernel: CPU: All CPU(s) started at EL1
May 12 12:53:55.808324 kernel: CPU features: detected: 32-bit EL0 Support
May 12 12:53:55.808331 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 12 12:53:55.808338 kernel: CPU features: detected: Common not Private translations
May 12 12:53:55.808344 kernel: CPU features: detected: CRC32 instructions
May 12 12:53:55.808351 kernel: CPU features: detected: Enhanced Virtualization Traps
May 12 12:53:55.808358 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 12 12:53:55.808365 kernel: CPU features: detected: LSE atomic instructions
May 12 12:53:55.808372 kernel: CPU features: detected: Privileged Access Never
May 12 12:53:55.808379 kernel: CPU features: detected: RAS Extension Support
May 12 12:53:55.808387 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 12 12:53:55.808394 kernel: alternatives: applying system-wide alternatives
May 12 12:53:55.808400 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 12 12:53:55.808408 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 12 12:53:55.808415 kernel: devtmpfs: initialized
May 12 12:53:55.808422 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 12 12:53:55.808429 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 12 12:53:55.808436 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 12 12:53:55.808442 kernel: 0 pages in range for non-PLT usage
May 12 12:53:55.808451 kernel: 508544 pages in range for PLT usage
May 12 12:53:55.808458 kernel: pinctrl core: initialized pinctrl subsystem
May 12 12:53:55.808464 kernel: SMBIOS 3.0.0 present.
May 12 12:53:55.808471 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 12 12:53:55.808478 kernel: DMI: Memory slots populated: 1/1
May 12 12:53:55.808485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 12 12:53:55.808492 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 12 12:53:55.808499 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 12 12:53:55.808506 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 12 12:53:55.808514 kernel: audit: initializing netlink subsys (disabled)
May 12 12:53:55.808520 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 12 12:53:55.808527 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 12 12:53:55.808534 kernel: cpuidle: using governor menu
May 12 12:53:55.808541 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 12 12:53:55.808547 kernel: ASID allocator initialised with 32768 entries
May 12 12:53:55.808554 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 12 12:53:55.808561 kernel: Serial: AMBA PL011 UART driver
May 12 12:53:55.808568 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 12 12:53:55.808576 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 12 12:53:55.808583 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 12 12:53:55.808589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 12 12:53:55.808596 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 12 12:53:55.808603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 12 12:53:55.808610 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 12 12:53:55.808617 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 12 12:53:55.808624 kernel: ACPI: Added _OSI(Module Device)
May 12 12:53:55.808631 kernel: ACPI: Added _OSI(Processor Device)
May 12 12:53:55.808639 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 12 12:53:55.808646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 12 12:53:55.808653 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 12 12:53:55.808660 kernel: ACPI: Interpreter enabled
May 12 12:53:55.808667 kernel: ACPI: Using GIC for interrupt routing
May 12 12:53:55.808673 kernel: ACPI: MCFG table detected, 1 entries
May 12 12:53:55.808680 kernel: ACPI: CPU0 has been hot-added
May 12 12:53:55.808687 kernel: ACPI: CPU1 has been hot-added
May 12 12:53:55.808694 kernel: ACPI: CPU2 has been hot-added
May 12 12:53:55.808702 kernel: ACPI: CPU3 has been hot-added
May 12 12:53:55.808709 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 12 12:53:55.808715 kernel: printk: legacy console [ttyAMA0] enabled
May 12 12:53:55.808722 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 12 12:53:55.808869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 12 12:53:55.808941 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 12 12:53:55.809000 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 12 12:53:55.809058 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 12 12:53:55.809120 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 12 12:53:55.809130 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 12 12:53:55.809137 kernel: PCI host bridge to bus 0000:00
May 12 12:53:55.809202 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 12 12:53:55.809261 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 12 12:53:55.809316 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 12 12:53:55.809368 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 12 12:53:55.809443 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 12 12:53:55.809512 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 12 12:53:55.809575 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 12 12:53:55.809634 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 12 12:53:55.809693 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 12:53:55.809752 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 12 12:53:55.809813 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 12 12:53:55.809903 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 12 12:53:55.809963 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 12 12:53:55.810016 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 12 12:53:55.810071 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 12 12:53:55.810080 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 12 12:53:55.810088 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 12 12:53:55.810095 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 12 12:53:55.810105 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 12 12:53:55.810112 kernel: iommu: Default domain type: Translated
May 12 12:53:55.810119 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 12 12:53:55.810125 kernel: efivars: Registered efivars operations
May 12 12:53:55.810133 kernel: vgaarb: loaded
May 12 12:53:55.810139 kernel: clocksource: Switched to clocksource arch_sys_counter
May 12 12:53:55.810146 kernel: VFS: Disk quotas dquot_6.6.0
May 12 12:53:55.810154 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 12 12:53:55.810160 kernel: pnp: PnP ACPI init
May 12 12:53:55.810232 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 12 12:53:55.810242 kernel: pnp: PnP ACPI: found 1 devices
May 12 12:53:55.810249 kernel: NET: Registered PF_INET protocol family
May 12 12:53:55.810256 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 12 12:53:55.810264 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 12 12:53:55.810271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 12 12:53:55.810278 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 12 12:53:55.810285 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 12 12:53:55.810293 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 12 12:53:55.810300 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 12:53:55.810307 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 12:53:55.810314 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 12 12:53:55.810321 kernel: PCI: CLS 0 bytes, default 64
May 12 12:53:55.810328 kernel: kvm [1]: HYP mode not available
May 12 12:53:55.810335 kernel: Initialise system trusted keyrings
May 12 12:53:55.810342 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 12 12:53:55.810349 kernel: Key type asymmetric registered
May 12 12:53:55.810356 kernel: Asymmetric key parser 'x509' registered
May 12 12:53:55.810363 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 12 12:53:55.810370 kernel: io scheduler mq-deadline registered
May 12 12:53:55.810377 kernel: io scheduler kyber registered
May 12 12:53:55.810384 kernel: io scheduler bfq registered
May 12 12:53:55.810391 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 12 12:53:55.810398 kernel: ACPI: button: Power Button [PWRB]
May 12 12:53:55.810405 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 12 12:53:55.810464 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 12 12:53:55.810475 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 12 12:53:55.810482 kernel: thunder_xcv, ver 1.0
May 12 12:53:55.810489 kernel: thunder_bgx, ver 1.0
May 12 12:53:55.810496 kernel: nicpf, ver 1.0
May 12 12:53:55.810502 kernel: nicvf, ver 1.0
May 12 12:53:55.810567 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 12 12:53:55.810623 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-12T12:53:55 UTC (1747054435)
May 12 12:53:55.810632 kernel: hid: raw HID events driver (C) Jiri Kosina
May 12 12:53:55.810641 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 12 12:53:55.810655 kernel: watchdog: NMI not fully supported
May 12 12:53:55.810662 kernel: watchdog: Hard watchdog permanently disabled
May 12 12:53:55.810669 kernel: NET: Registered PF_INET6 protocol family
May 12 12:53:55.810676 kernel: Segment Routing with IPv6
May 12 12:53:55.810683 kernel: In-situ OAM (IOAM) with IPv6
May 12 12:53:55.810690 kernel: NET: Registered PF_PACKET protocol family
May 12 12:53:55.810696 kernel: Key type dns_resolver registered
May 12 12:53:55.810703 kernel: registered taskstats version 1
May 12 12:53:55.810712 kernel: Loading compiled-in X.509 certificates
May 12 12:53:55.810719 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 70332c3c97692377fe4e22d618d0d7d042cdb7d6'
May 12 12:53:55.810726 kernel: Demotion targets for Node 0: null
May 12 12:53:55.810732 kernel: Key type .fscrypt registered
May 12 12:53:55.810739 kernel: Key type fscrypt-provisioning registered
May 12 12:53:55.810746 kernel: ima: No TPM chip found, activating TPM-bypass!
May 12 12:53:55.810753 kernel: ima: Allocated hash algorithm: sha1
May 12 12:53:55.810759 kernel: ima: No architecture policies found
May 12 12:53:55.810766 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 12 12:53:55.810774 kernel: clk: Disabling unused clocks
May 12 12:53:55.810781 kernel: PM: genpd: Disabling unused power domains
May 12 12:53:55.810788 kernel: Warning: unable to open an initial console.
May 12 12:53:55.810795 kernel: Freeing unused kernel memory: 39424K
May 12 12:53:55.810802 kernel: Run /init as init process
May 12 12:53:55.810809 kernel: with arguments:
May 12 12:53:55.810816 kernel: /init
May 12 12:53:55.810822 kernel: with environment:
May 12 12:53:55.810829 kernel: HOME=/
May 12 12:53:55.810851 kernel: TERM=linux
May 12 12:53:55.810858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 12 12:53:55.810866 systemd[1]: Successfully made /usr/ read-only.
May 12 12:53:55.810876 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 12:53:55.810884 systemd[1]: Detected virtualization kvm.
May 12 12:53:55.810891 systemd[1]: Detected architecture arm64.
May 12 12:53:55.810899 systemd[1]: Running in initrd.
May 12 12:53:55.810906 systemd[1]: No hostname configured, using default hostname.
May 12 12:53:55.810915 systemd[1]: Hostname set to .
May 12 12:53:55.810922 systemd[1]: Initializing machine ID from VM UUID.
May 12 12:53:55.810930 systemd[1]: Queued start job for default target initrd.target.
May 12 12:53:55.810937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 12:53:55.810945 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 12:53:55.810952 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 12 12:53:55.810960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 12:53:55.810968 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 12 12:53:55.810977 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 12 12:53:55.810985 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 12 12:53:55.810993 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 12 12:53:55.811001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 12:53:55.811008 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 12:53:55.811015 systemd[1]: Reached target paths.target - Path Units.
May 12 12:53:55.811024 systemd[1]: Reached target slices.target - Slice Units.
May 12 12:53:55.811032 systemd[1]: Reached target swap.target - Swaps.
May 12 12:53:55.811039 systemd[1]: Reached target timers.target - Timer Units.
May 12 12:53:55.811047 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 12 12:53:55.811054 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 12:53:55.811062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 12:53:55.811069 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 12 12:53:55.811077 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 12:53:55.811084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 12:53:55.811093 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 12:53:55.811100 systemd[1]: Reached target sockets.target - Socket Units.
May 12 12:53:55.811108 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 12 12:53:55.811115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 12:53:55.811122 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 12 12:53:55.811130 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 12 12:53:55.811138 systemd[1]: Starting systemd-fsck-usr.service...
May 12 12:53:55.811145 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 12:53:55.811154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 12:53:55.811161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 12:53:55.811169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 12:53:55.811176 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 12 12:53:55.811184 systemd[1]: Finished systemd-fsck-usr.service.
May 12 12:53:55.811193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 12:53:55.811217 systemd-journald[243]: Collecting audit messages is disabled.
May 12 12:53:55.811235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 12:53:55.811244 systemd-journald[243]: Journal started
May 12 12:53:55.811264 systemd-journald[243]: Runtime Journal (/run/log/journal/4be0459a59264d5ebcd1a92dbe05a1f4) is 6M, max 48.5M, 42.4M free.
May 12 12:53:55.802135 systemd-modules-load[244]: Inserted module 'overlay'
May 12 12:53:55.813127 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 12:53:55.816262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 12:53:55.817977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 12:53:55.821948 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 12 12:53:55.821927 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 12:53:55.825483 kernel: Bridge firewalling registered
May 12 12:53:55.823727 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 12 12:53:55.827966 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 12:53:55.830450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 12:53:55.833139 systemd-tmpfiles[262]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 12 12:53:55.833971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 12:53:55.836407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 12:53:55.843730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 12:53:55.845215 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 12:53:55.847332 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 12:53:55.851200 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 12 12:53:55.853407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 12:53:55.878601 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db1cab4675737b11381d09c3bd697a21f5e572397084a94e6025aaadcb33c7b2
May 12 12:53:55.893852 systemd-resolved[288]: Positive Trust Anchors:
May 12 12:53:55.893869 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 12:53:55.893901 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 12:53:55.898528 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 12 12:53:55.899480 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 12:53:55.902979 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 12:53:55.958876 kernel: SCSI subsystem initialized
May 12 12:53:55.962857 kernel: Loading iSCSI transport class v2.0-870.
May 12 12:53:55.971902 kernel: iscsi: registered transport (tcp)
May 12 12:53:55.982871 kernel: iscsi: registered transport (qla4xxx)
May 12 12:53:55.982895 kernel: QLogic iSCSI HBA Driver
May 12 12:53:55.999008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 12:53:56.019212 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 12:53:56.021868 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 12:53:56.064979 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 12 12:53:56.067209 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 12 12:53:56.125864 kernel: raid6: neonx8 gen() 15780 MB/s
May 12 12:53:56.142866 kernel: raid6: neonx4 gen() 15807 MB/s
May 12 12:53:56.159864 kernel: raid6: neonx2 gen() 13171 MB/s
May 12 12:53:56.176900 kernel: raid6: neonx1 gen() 10426 MB/s
May 12 12:53:56.193871 kernel: raid6: int64x8 gen() 6897 MB/s
May 12 12:53:56.212856 kernel: raid6: int64x4 gen() 7337 MB/s
May 12 12:53:56.227876 kernel: raid6: int64x2 gen() 6102 MB/s
May 12 12:53:56.244867 kernel: raid6: int64x1 gen() 5055 MB/s
May 12 12:53:56.244889 kernel: raid6: using algorithm neonx4 gen() 15807 MB/s
May 12 12:53:56.261875 kernel: raid6: .... xor() 12381 MB/s, rmw enabled
May 12 12:53:56.261903 kernel: raid6: using neon recovery algorithm
May 12 12:53:56.266857 kernel: xor: measuring software checksum speed
May 12 12:53:56.266872 kernel: 8regs : 21630 MB/sec
May 12 12:53:56.268190 kernel: 32regs : 19128 MB/sec
May 12 12:53:56.268214 kernel: arm64_neon : 28147 MB/sec
May 12 12:53:56.268230 kernel: xor: using function: arm64_neon (28147 MB/sec)
May 12 12:53:56.323870 kernel: Btrfs loaded, zoned=no, fsverity=no
May 12 12:53:56.330352 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 12 12:53:56.332740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 12:53:56.364134 systemd-udevd[496]: Using default interface naming scheme 'v255'.
May 12 12:53:56.368161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 12:53:56.370483 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 12 12:53:56.396279 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
May 12 12:53:56.418089 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 12:53:56.420327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 12:53:56.470067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 12:53:56.473283 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 12 12:53:56.523861 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 12 12:53:56.539648 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 12 12:53:56.539768 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 12 12:53:56.539779 kernel: GPT:9289727 != 19775487
May 12 12:53:56.539788 kernel: GPT:Alternate GPT header not at the end of the disk.
May 12 12:53:56.539797 kernel: GPT:9289727 != 19775487
May 12 12:53:56.539805 kernel: GPT: Use GNU Parted to correct GPT errors.
May 12 12:53:56.539814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 12:53:56.535414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 12:53:56.535522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 12:53:56.537051 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 12 12:53:56.539058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 12:53:56.569294 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 12 12:53:56.570727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 12:53:56.572676 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 12 12:53:56.585673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 12 12:53:56.593137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 12:53:56.599167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 12 12:53:56.600329 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 12 12:53:56.602404 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 12:53:56.605112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 12:53:56.606908 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 12:53:56.609296 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 12 12:53:56.611024 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 12 12:53:56.635792 disk-uuid[589]: Primary Header is updated.
May 12 12:53:56.635792 disk-uuid[589]: Secondary Entries is updated.
May 12 12:53:56.635792 disk-uuid[589]: Secondary Header is updated.
May 12 12:53:56.639867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 12:53:56.638473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 12 12:53:57.648907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 12:53:57.648961 disk-uuid[593]: The operation has completed successfully.
May 12 12:53:57.674966 systemd[1]: disk-uuid.service: Deactivated successfully.
May 12 12:53:57.675906 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 12 12:53:57.699935 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 12 12:53:57.721523 sh[610]: Success
May 12 12:53:57.736039 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 12 12:53:57.736069 kernel: device-mapper: uevent: version 1.0.3
May 12 12:53:57.736851 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 12 12:53:57.747975 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 12 12:53:57.771623 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 12 12:53:57.774437 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 12 12:53:57.789013 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 12 12:53:57.795388 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 12 12:53:57.795429 kernel: BTRFS: device fsid 95177d8d-2628-4ec0-8d1f-5081db0fb221 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (623)
May 12 12:53:57.796421 kernel: BTRFS info (device dm-0): first mount of filesystem 95177d8d-2628-4ec0-8d1f-5081db0fb221
May 12 12:53:57.797132 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 12 12:53:57.797145 kernel: BTRFS info (device dm-0): using free-space-tree
May 12 12:53:57.800743 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 12 12:53:57.802003 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 12 12:53:57.803395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 12 12:53:57.804129 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 12 12:53:57.805633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 12 12:53:57.830860 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (654)
May 12 12:53:57.832581 kernel: BTRFS info (device vda6): first mount of filesystem ac1da43e-1625-4557-8fbb-3126fee79710
May 12 12:53:57.832611 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 12:53:57.832622 kernel: BTRFS info (device vda6): using free-space-tree
May 12 12:53:57.839870 kernel: BTRFS info (device vda6): last unmount of filesystem ac1da43e-1625-4557-8fbb-3126fee79710
May 12 12:53:57.841232 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 12 12:53:57.843091 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 12 12:53:57.915664 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 12:53:57.919985 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 12:53:57.962063 systemd-networkd[801]: lo: Link UP
May 12 12:53:57.962078 systemd-networkd[801]: lo: Gained carrier
May 12 12:53:57.962974 systemd-networkd[801]: Enumeration completed
May 12 12:53:57.963091 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 12:53:57.963592 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 12:53:57.963596 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 12:53:57.964364 systemd-networkd[801]: eth0: Link UP
May 12 12:53:57.964367 systemd-networkd[801]: eth0: Gained carrier
May 12 12:53:57.964377 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 12:53:57.964885 systemd[1]: Reached target network.target - Network.
May 12 12:53:57.983428 ignition[701]: Ignition 2.21.0
May 12 12:53:57.983445 ignition[701]: Stage: fetch-offline
May 12 12:53:57.983480 ignition[701]: no configs at "/usr/lib/ignition/base.d"
May 12 12:53:57.983488 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 12:53:57.983672 ignition[701]: parsed url from cmdline: ""
May 12 12:53:57.983675 ignition[701]: no config URL provided
May 12 12:53:57.983680 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
May 12 12:53:57.983687 ignition[701]: no config at "/usr/lib/ignition/user.ign"
May 12 12:53:57.983708 ignition[701]: op(1): [started] loading QEMU firmware config module
May 12 12:53:57.988926 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 12:53:57.983712 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 12 12:53:57.998092 ignition[701]: op(1): [finished] loading QEMU firmware config module
May 12 12:53:58.034481 ignition[701]: parsing config with SHA512: 8fb0c06b5582afe9343eb790e211953d79b8fad5f6f4d985bc4de28819e3e7c4fe8dfb78e48d0e7577d22fd83cfa709f29d426e1bbef13f6b08f2f383f42da63
May 12 12:53:58.040019 unknown[701]: fetched base config from "system"
May 12 12:53:58.040032 unknown[701]: fetched user config from "qemu"
May 12 12:53:58.040406 ignition[701]: fetch-offline: fetch-offline passed
May 12 12:53:58.040459 ignition[701]: Ignition finished successfully
May 12 12:53:58.042752 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 12:53:58.044435 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 12 12:53:58.045192 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 12 12:53:58.073473 ignition[812]: Ignition 2.21.0
May 12 12:53:58.073485 ignition[812]: Stage: kargs
May 12 12:53:58.073619 ignition[812]: no configs at "/usr/lib/ignition/base.d"
May 12 12:53:58.073628 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 12:53:58.074393 ignition[812]: kargs: kargs passed
May 12 12:53:58.074438 ignition[812]: Ignition finished successfully
May 12 12:53:58.078366 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 12 12:53:58.080530 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 12 12:53:58.120954 ignition[820]: Ignition 2.21.0
May 12 12:53:58.120968 ignition[820]: Stage: disks
May 12 12:53:58.121098 ignition[820]: no configs at "/usr/lib/ignition/base.d"
May 12 12:53:58.121107 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 12:53:58.122360 ignition[820]: disks: disks passed
May 12 12:53:58.124047 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 12 12:53:58.122407 ignition[820]: Ignition finished successfully
May 12 12:53:58.125911 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 12 12:53:58.127072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 12 12:53:58.128922 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 12:53:58.130417 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 12:53:58.132210 systemd[1]: Reached target basic.target - Basic System.
May 12 12:53:58.134810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 12 12:53:58.159191 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 12 12:53:58.163156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 12 12:53:58.165745 systemd[1]: Mounting sysroot.mount - /sysroot...
May 12 12:53:58.228872 kernel: EXT4-fs (vda9): mounted filesystem 542f821e-2a33-4182-a417-ae9263f8f316 r/w with ordered data mode. Quota mode: none.
May 12 12:53:58.228920 systemd[1]: Mounted sysroot.mount - /sysroot.
May 12 12:53:58.229952 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 12 12:53:58.232163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 12:53:58.233755 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 12 12:53:58.234760 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 12 12:53:58.234798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 12 12:53:58.234821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 12:53:58.251163 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 12 12:53:58.253463 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 12 12:53:58.258046 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (838)
May 12 12:53:58.258069 kernel: BTRFS info (device vda6): first mount of filesystem ac1da43e-1625-4557-8fbb-3126fee79710
May 12 12:53:58.258079 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 12:53:58.258088 kernel: BTRFS info (device vda6): using free-space-tree
May 12 12:53:58.260670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 12:53:58.292102 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
May 12 12:53:58.295652 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
May 12 12:53:58.299367 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
May 12 12:53:58.302873 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
May 12 12:53:58.370751 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 12 12:53:58.372796 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 12 12:53:58.374436 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 12 12:53:58.390123 kernel: BTRFS info (device vda6): last unmount of filesystem ac1da43e-1625-4557-8fbb-3126fee79710
May 12 12:53:58.403488 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 12 12:53:58.407924 ignition[951]: INFO : Ignition 2.21.0
May 12 12:53:58.407924 ignition[951]: INFO : Stage: mount
May 12 12:53:58.409974 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 12:53:58.409974 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 12:53:58.409974 ignition[951]: INFO : mount: mount passed
May 12 12:53:58.409974 ignition[951]: INFO : Ignition finished successfully
May 12 12:53:58.411896 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 12 12:53:58.413719 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 12 12:53:58.794775 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 12 12:53:58.796254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 12:53:58.814879 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (964)
May 12 12:53:58.814907 kernel: BTRFS info (device vda6): first mount of filesystem ac1da43e-1625-4557-8fbb-3126fee79710
May 12 12:53:58.814917 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 12:53:58.815967 kernel: BTRFS info (device vda6): using free-space-tree
May 12 12:53:58.818385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 12:53:58.844600 ignition[981]: INFO : Ignition 2.21.0
May 12 12:53:58.844600 ignition[981]: INFO : Stage: files
May 12 12:53:58.847032 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 12:53:58.847032 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 12:53:58.847032 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
May 12 12:53:58.850231 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 12 12:53:58.850231 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 12 12:53:58.850231 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 12 12:53:58.850231 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 12 12:53:58.850231 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 12 12:53:58.849180 unknown[981]: wrote ssh authorized keys file for user: core
May 12 12:53:58.857262 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 12 12:53:58.857262 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 12 12:53:58.928903 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 12 12:53:59.238675 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 12 12:53:59.238675 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 12:53:59.242398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 12:53:59.253928 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 12 12:53:59.253928 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 12 12:53:59.253928 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 12:53:59.253928 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 12:53:59.253928 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 12:53:59.253928 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 12 12:53:59.481996 systemd-networkd[801]: eth0: Gained IPv6LL
May 12 12:53:59.570180 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 12 12:53:59.911574 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 12:53:59.911574 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 12 12:53:59.915475 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 12 12:53:59.932018 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 12 12:53:59.935860 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 12:53:59.937914 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 12 12:53:59.937914 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 12 12:53:59.937914 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 12 12:53:59.937914 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 12 12:53:59.937914 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 12 12:53:59.937914 ignition[981]: INFO : files: files passed
May 12 12:53:59.937914 ignition[981]: INFO : Ignition finished successfully
May 12 12:53:59.939002 systemd[1]: Finished ignition-files.service - Ignition (files).
May 12 12:53:59.941587 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 12 12:53:59.943616 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 12 12:53:59.962477 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
May 12 12:53:59.962754 systemd[1]: ignition-quench.service: Deactivated successfully.
May 12 12:53:59.963869 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 12 12:53:59.966822 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 12:53:59.966822 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 12 12:53:59.969633 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 12:53:59.968636 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 12:53:59.971351 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 12:53:59.974078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 12:54:00.000780 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 12:54:00.000899 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 12:54:00.002232 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 12:54:00.003846 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 12:54:00.005788 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 12:54:00.006518 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 12:54:00.020767 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 12:54:00.023049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 12:54:00.046262 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 12:54:00.047472 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 12:54:00.049268 systemd[1]: Stopped target timers.target - Timer Units.
May 12 12:54:00.050746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 12:54:00.050890 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 12:54:00.053132 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 12:54:00.054872 systemd[1]: Stopped target basic.target - Basic System.
May 12 12:54:00.056530 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 12:54:00.058015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 12:54:00.059672 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 12:54:00.061496 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 12 12:54:00.063160 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 12:54:00.064719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 12:54:00.066537 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 12:54:00.068311 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 12:54:00.069873 systemd[1]: Stopped target swap.target - Swaps.
May 12 12:54:00.071417 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 12:54:00.071533 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 12:54:00.073570 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 12:54:00.074678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 12:54:00.076468 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 12 12:54:00.076575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 12:54:00.078246 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 12 12:54:00.078350 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 12 12:54:00.080607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 12 12:54:00.080711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 12:54:00.082921 systemd[1]: Stopped target paths.target - Path Units.
May 12 12:54:00.084396 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 12 12:54:00.087904 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 12:54:00.089601 systemd[1]: Stopped target slices.target - Slice Units.
May 12 12:54:00.091514 systemd[1]: Stopped target sockets.target - Socket Units.
May 12 12:54:00.093064 systemd[1]: iscsid.socket: Deactivated successfully.
May 12 12:54:00.093147 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 12 12:54:00.094663 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 12 12:54:00.094741 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 12:54:00.096252 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 12 12:54:00.096361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 12:54:00.098047 systemd[1]: ignition-files.service: Deactivated successfully.
May 12 12:54:00.098147 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 12 12:54:00.100454 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 12 12:54:00.102903 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 12 12:54:00.104117 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 12 12:54:00.104284 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 12:54:00.106069 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 12 12:54:00.106207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 12:54:00.112225 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 12 12:54:00.113964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 12 12:54:00.121627 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 12 12:54:00.126189 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 12 12:54:00.126872 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 12 12:54:00.128617 ignition[1036]: INFO : Ignition 2.21.0
May 12 12:54:00.128617 ignition[1036]: INFO : Stage: umount
May 12 12:54:00.128617 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 12:54:00.128617 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 12:54:00.131670 ignition[1036]: INFO : umount: umount passed
May 12 12:54:00.131670 ignition[1036]: INFO : Ignition finished successfully
May 12 12:54:00.131271 systemd[1]: ignition-mount.service: Deactivated successfully.
May 12 12:54:00.131400 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 12 12:54:00.132768 systemd[1]: Stopped target network.target - Network.
May 12 12:54:00.134090 systemd[1]: ignition-disks.service: Deactivated successfully.
May 12 12:54:00.134146 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 12 12:54:00.135577 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 12 12:54:00.135618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 12 12:54:00.137091 systemd[1]: ignition-setup.service: Deactivated successfully.
May 12 12:54:00.137142 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 12 12:54:00.138576 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 12 12:54:00.138615 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 12 12:54:00.140080 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 12 12:54:00.140128 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 12 12:54:00.141718 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 12 12:54:00.143270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 12 12:54:00.149341 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 12 12:54:00.149458 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 12 12:54:00.153987 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 12 12:54:00.154572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 12 12:54:00.154650 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 12:54:00.157784 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 12 12:54:00.158001 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 12 12:54:00.158125 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 12 12:54:00.162618 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 12 12:54:00.162747 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 12 12:54:00.163911 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 12 12:54:00.163951 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 12 12:54:00.166279 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 12 12:54:00.167348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 12 12:54:00.167404 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 12:54:00.169368 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 12 12:54:00.169411 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 12 12:54:00.171982 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 12 12:54:00.172022 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 12 12:54:00.173813 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 12:54:00.177717 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 12 12:54:00.190854 systemd[1]: network-cleanup.service: Deactivated successfully. May 12 12:54:00.191025 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 12 12:54:00.192992 systemd[1]: systemd-udevd.service: Deactivated successfully. May 12 12:54:00.193119 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 12:54:00.195199 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 12 12:54:00.195256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 12 12:54:00.196513 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 12 12:54:00.196546 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 12 12:54:00.198389 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 12 12:54:00.198443 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 12 12:54:00.200623 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 12 12:54:00.200675 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 12 12:54:00.203258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 12 12:54:00.203319 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 12:54:00.206493 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 12 12:54:00.208267 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 12 12:54:00.208325 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
May 12 12:54:00.211239 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 12 12:54:00.211288 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 12:54:00.214071 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 12 12:54:00.214113 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 12:54:00.217110 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 12 12:54:00.217152 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 12 12:54:00.219179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 12:54:00.219228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 12:54:00.222769 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 12 12:54:00.222892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 12 12:54:00.224360 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 12 12:54:00.226455 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 12 12:54:00.250468 systemd[1]: Switching root. May 12 12:54:00.279808 systemd-journald[243]: Journal stopped May 12 12:54:01.188369 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). 
May 12 12:54:01.188426 kernel: SELinux: policy capability network_peer_controls=1 May 12 12:54:01.188438 kernel: SELinux: policy capability open_perms=1 May 12 12:54:01.188447 kernel: SELinux: policy capability extended_socket_class=1 May 12 12:54:01.188461 kernel: SELinux: policy capability always_check_network=0 May 12 12:54:01.188473 kernel: SELinux: policy capability cgroup_seclabel=1 May 12 12:54:01.188488 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 12 12:54:01.188497 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 12 12:54:01.188512 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 12 12:54:01.188522 kernel: SELinux: policy capability userspace_initial_context=0 May 12 12:54:01.188531 kernel: audit: type=1403 audit(1747054440.583:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 12 12:54:01.188543 systemd[1]: Successfully loaded SELinux policy in 47.120ms. May 12 12:54:01.188561 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.132ms. May 12 12:54:01.188573 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 12 12:54:01.188589 systemd[1]: Detected virtualization kvm. May 12 12:54:01.188599 systemd[1]: Detected architecture arm64. May 12 12:54:01.188609 systemd[1]: Detected first boot. May 12 12:54:01.188619 systemd[1]: Initializing machine ID from VM UUID. May 12 12:54:01.188630 zram_generator::config[1083]: No configuration found. May 12 12:54:01.188640 kernel: NET: Registered PF_VSOCK protocol family May 12 12:54:01.188652 systemd[1]: Populated /etc with preset unit settings. May 12 12:54:01.188665 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
May 12 12:54:01.188675 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 12 12:54:01.188685 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 12 12:54:01.188695 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 12 12:54:01.188706 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 12 12:54:01.188716 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 12 12:54:01.188726 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 12 12:54:01.188738 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 12 12:54:01.188749 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 12 12:54:01.188759 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 12 12:54:01.188770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 12 12:54:01.188780 systemd[1]: Created slice user.slice - User and Session Slice. May 12 12:54:01.188791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 12 12:54:01.188801 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 12 12:54:01.188812 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 12 12:54:01.188848 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 12 12:54:01.188866 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 12 12:54:01.188878 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 12 12:54:01.188888 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
May 12 12:54:01.188900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 12 12:54:01.188910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 12 12:54:01.188920 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 12 12:54:01.188931 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 12 12:54:01.188941 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 12 12:54:01.188953 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 12 12:54:01.188964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 12 12:54:01.188975 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 12 12:54:01.188985 systemd[1]: Reached target slices.target - Slice Units. May 12 12:54:01.188995 systemd[1]: Reached target swap.target - Swaps. May 12 12:54:01.189005 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 12 12:54:01.189016 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 12 12:54:01.189026 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 12 12:54:01.189037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 12 12:54:01.189048 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 12 12:54:01.189059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 12 12:54:01.189069 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 12 12:54:01.189079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 12 12:54:01.189090 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 12 12:54:01.189100 systemd[1]: Mounting media.mount - External Media Directory... 
May 12 12:54:01.189110 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 12 12:54:01.189120 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 12 12:54:01.189130 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 12 12:54:01.189142 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 12 12:54:01.189152 systemd[1]: Reached target machines.target - Containers. May 12 12:54:01.189163 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 12 12:54:01.189173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 12:54:01.189183 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 12 12:54:01.189193 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 12 12:54:01.189203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 12:54:01.189213 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 12:54:01.189225 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 12:54:01.189235 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 12 12:54:01.189245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 12:54:01.189256 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 12 12:54:01.189266 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 12 12:54:01.189276 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 12 12:54:01.189286 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
May 12 12:54:01.189296 systemd[1]: Stopped systemd-fsck-usr.service. May 12 12:54:01.189307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 12:54:01.189319 systemd[1]: Starting systemd-journald.service - Journal Service... May 12 12:54:01.189329 kernel: loop: module loaded May 12 12:54:01.189339 kernel: fuse: init (API version 7.41) May 12 12:54:01.189349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 12 12:54:01.189359 kernel: ACPI: bus type drm_connector registered May 12 12:54:01.189369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 12 12:54:01.189380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 12 12:54:01.189390 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 12 12:54:01.189402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 12 12:54:01.189412 systemd[1]: verity-setup.service: Deactivated successfully. May 12 12:54:01.189422 systemd[1]: Stopped verity-setup.service. May 12 12:54:01.189432 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 12 12:54:01.189442 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 12 12:54:01.189472 systemd-journald[1158]: Collecting audit messages is disabled. May 12 12:54:01.189496 systemd[1]: Mounted media.mount - External Media Directory. May 12 12:54:01.189507 systemd-journald[1158]: Journal started May 12 12:54:01.189529 systemd-journald[1158]: Runtime Journal (/run/log/journal/4be0459a59264d5ebcd1a92dbe05a1f4) is 6M, max 48.5M, 42.4M free. May 12 12:54:01.193884 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
May 12 12:54:01.193916 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 12 12:54:00.987371 systemd[1]: Queued start job for default target multi-user.target. May 12 12:54:01.007704 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 12 12:54:01.008097 systemd[1]: systemd-journald.service: Deactivated successfully. May 12 12:54:01.195601 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 12 12:54:01.197392 systemd[1]: Started systemd-journald.service - Journal Service. May 12 12:54:01.198875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 12 12:54:01.200334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 12 12:54:01.201808 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 12 12:54:01.202005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 12 12:54:01.203370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 12:54:01.203541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 12:54:01.206185 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 12:54:01.206362 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 12:54:01.207739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 12:54:01.207941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 12:54:01.209331 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 12 12:54:01.209488 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 12 12:54:01.210854 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 12:54:01.211033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 12:54:01.212478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 12 12:54:01.214025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 12 12:54:01.215521 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 12 12:54:01.217137 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 12 12:54:01.228714 systemd[1]: Reached target network-pre.target - Preparation for Network. May 12 12:54:01.231354 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 12 12:54:01.233452 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 12 12:54:01.234612 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 12 12:54:01.234641 systemd[1]: Reached target local-fs.target - Local File Systems. May 12 12:54:01.236609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 12 12:54:01.241089 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 12 12:54:01.242340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 12:54:01.243501 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 12 12:54:01.245394 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 12 12:54:01.246581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 12:54:01.247595 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 12 12:54:01.248687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 12 12:54:01.252902 systemd-journald[1158]: Time spent on flushing to /var/log/journal/4be0459a59264d5ebcd1a92dbe05a1f4 is 21.821ms for 881 entries. May 12 12:54:01.252902 systemd-journald[1158]: System Journal (/var/log/journal/4be0459a59264d5ebcd1a92dbe05a1f4) is 8M, max 195.6M, 187.6M free. May 12 12:54:01.283674 systemd-journald[1158]: Received client request to flush runtime journal. May 12 12:54:01.283716 kernel: loop0: detected capacity change from 0 to 201592 May 12 12:54:01.283728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 12 12:54:01.253067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 12 12:54:01.256199 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 12 12:54:01.258331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 12 12:54:01.262336 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 12 12:54:01.264412 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 12 12:54:01.266148 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 12 12:54:01.274251 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 12 12:54:01.279519 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 12 12:54:01.283096 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 12 12:54:01.284680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 12 12:54:01.287298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 12 12:54:01.298043 kernel: loop1: detected capacity change from 0 to 138376 May 12 12:54:01.298264 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 12 12:54:01.298282 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. 
May 12 12:54:01.305282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 12 12:54:01.308927 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 12 12:54:01.321040 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 12 12:54:01.329906 kernel: loop2: detected capacity change from 0 to 107312 May 12 12:54:01.343903 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 12 12:54:01.346560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 12 12:54:01.354863 kernel: loop3: detected capacity change from 0 to 201592 May 12 12:54:01.364918 kernel: loop4: detected capacity change from 0 to 138376 May 12 12:54:01.367628 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. May 12 12:54:01.367651 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. May 12 12:54:01.371861 kernel: loop5: detected capacity change from 0 to 107312 May 12 12:54:01.372038 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 12:54:01.376525 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 12 12:54:01.376913 (sd-merge)[1222]: Merged extensions into '/usr'. May 12 12:54:01.380441 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... May 12 12:54:01.380462 systemd[1]: Reloading... May 12 12:54:01.454875 zram_generator::config[1249]: No configuration found. May 12 12:54:01.516902 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 12 12:54:01.530071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 12 12:54:01.604646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 12 12:54:01.604861 systemd[1]: Reloading finished in 223 ms. May 12 12:54:01.639980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 12 12:54:01.641439 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 12 12:54:01.655064 systemd[1]: Starting ensure-sysext.service... May 12 12:54:01.656805 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 12 12:54:01.666632 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... May 12 12:54:01.666651 systemd[1]: Reloading... May 12 12:54:01.684018 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 12 12:54:01.684088 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 12 12:54:01.684415 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 12 12:54:01.684644 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 12 12:54:01.685294 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 12 12:54:01.685494 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. May 12 12:54:01.685541 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. May 12 12:54:01.688148 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. May 12 12:54:01.688163 systemd-tmpfiles[1285]: Skipping /boot May 12 12:54:01.696891 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. May 12 12:54:01.696905 systemd-tmpfiles[1285]: Skipping /boot May 12 12:54:01.720861 zram_generator::config[1313]: No configuration found. 
May 12 12:54:01.790889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 12:54:01.865594 systemd[1]: Reloading finished in 198 ms. May 12 12:54:01.875669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 12 12:54:01.892011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 12:54:01.900277 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 12:54:01.902714 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 12 12:54:01.914047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 12 12:54:01.917190 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 12 12:54:01.920144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 12:54:01.922914 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 12 12:54:01.927320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 12:54:01.929111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 12:54:01.936883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 12:54:01.942053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 12:54:01.943151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 12 12:54:01.943270 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 12:54:01.945315 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 12 12:54:01.947486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 12:54:01.947636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 12:54:01.949426 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 12 12:54:01.951171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 12:54:01.951314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 12:54:01.958378 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 12:54:01.958556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 12:54:01.961040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 12:54:01.964459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 12:54:01.967365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 12:54:01.970050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 12 12:54:01.970237 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 12:54:01.973929 systemd-udevd[1353]: Using default interface naming scheme 'v255'. May 12 12:54:01.980663 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 12 12:54:01.984262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 12 12:54:01.987680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 12:54:01.987853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 12:54:01.990368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 12:54:01.992283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 12:54:01.992466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 12:54:01.997878 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 12 12:54:02.001786 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 12 12:54:02.010008 augenrules[1413]: No rules May 12 12:54:02.014358 systemd[1]: audit-rules.service: Deactivated successfully. May 12 12:54:02.014616 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 12:54:02.025248 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 12 12:54:02.042224 systemd[1]: Finished ensure-sysext.service. May 12 12:54:02.052745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 12 12:54:02.054263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 12 12:54:02.057007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 12 12:54:02.059016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 12 12:54:02.068294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 12 12:54:02.070065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 12 12:54:02.070102 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 12 12:54:02.071704 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 12 12:54:02.077065 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 12 12:54:02.078484 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 12 12:54:02.082155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 12 12:54:02.082333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 12 12:54:02.083910 systemd[1]: modprobe@drm.service: Deactivated successfully. May 12 12:54:02.084090 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 12 12:54:02.087212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 12 12:54:02.087375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 12 12:54:02.088882 systemd[1]: modprobe@loop.service: Deactivated successfully. May 12 12:54:02.089058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 12 12:54:02.094533 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 12 12:54:02.100747 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 12 12:54:02.100807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 12 12:54:02.121638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 12 12:54:02.124311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 12 12:54:02.149929 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 12 12:54:02.190462 systemd-networkd[1440]: lo: Link UP May 12 12:54:02.190470 systemd-networkd[1440]: lo: Gained carrier May 12 12:54:02.194809 systemd-networkd[1440]: Enumeration completed May 12 12:54:02.194934 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 12:54:02.195350 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 12:54:02.195360 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 12 12:54:02.195797 systemd-networkd[1440]: eth0: Link UP May 12 12:54:02.198066 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 12 12:54:02.199515 systemd-networkd[1440]: eth0: Gained carrier May 12 12:54:02.199529 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 12:54:02.200535 systemd-resolved[1352]: Positive Trust Anchors: May 12 12:54:02.200554 systemd-resolved[1352]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 12 12:54:02.200585 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 12 12:54:02.202096 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 12 12:54:02.209276 systemd-resolved[1352]: Defaulting to hostname 'linux'. May 12 12:54:02.211041 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 12 12:54:02.213973 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 12 12:54:02.216024 systemd[1]: Reached target network.target - Network. May 12 12:54:02.217120 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 12 12:54:02.218452 systemd[1]: Reached target time-set.target - System Time Set. May 12 12:54:02.221316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 12:54:02.224886 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 12 12:54:02.225430 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. May 12 12:54:02.226814 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 12 12:54:02.226883 systemd-timesyncd[1441]: Initial clock synchronization to Mon 2025-05-12 12:54:02.323418 UTC. 
May 12 12:54:02.232430 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 12 12:54:02.269008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 12:54:02.270354 systemd[1]: Reached target sysinit.target - System Initialization. May 12 12:54:02.272099 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 12 12:54:02.273306 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 12 12:54:02.274742 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 12 12:54:02.275920 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 12 12:54:02.277120 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 12 12:54:02.278313 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 12 12:54:02.278354 systemd[1]: Reached target paths.target - Path Units. May 12 12:54:02.279246 systemd[1]: Reached target timers.target - Timer Units. May 12 12:54:02.281461 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 12 12:54:02.283751 systemd[1]: Starting docker.socket - Docker Socket for the API... May 12 12:54:02.287513 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 12 12:54:02.288926 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 12 12:54:02.290138 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 12 12:54:02.293616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 12 12:54:02.295254 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
May 12 12:54:02.296902 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 12 12:54:02.297997 systemd[1]: Reached target sockets.target - Socket Units. May 12 12:54:02.298921 systemd[1]: Reached target basic.target - Basic System. May 12 12:54:02.299818 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 12 12:54:02.299871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 12 12:54:02.300719 systemd[1]: Starting containerd.service - containerd container runtime... May 12 12:54:02.302614 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 12 12:54:02.304420 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 12 12:54:02.306401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 12 12:54:02.308568 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 12 12:54:02.309606 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 12 12:54:02.311995 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 12 12:54:02.315268 jq[1481]: false May 12 12:54:02.315226 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 12 12:54:02.318440 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 12 12:54:02.321968 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 12 12:54:02.326988 systemd[1]: Starting systemd-logind.service - User Login Management... May 12 12:54:02.328803 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 12 12:54:02.329220 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 12 12:54:02.330004 systemd[1]: Starting update-engine.service - Update Engine... May 12 12:54:02.333199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 12 12:54:02.334665 extend-filesystems[1482]: Found loop3 May 12 12:54:02.335582 extend-filesystems[1482]: Found loop4 May 12 12:54:02.335582 extend-filesystems[1482]: Found loop5 May 12 12:54:02.335582 extend-filesystems[1482]: Found vda May 12 12:54:02.335582 extend-filesystems[1482]: Found vda1 May 12 12:54:02.335582 extend-filesystems[1482]: Found vda2 May 12 12:54:02.335582 extend-filesystems[1482]: Found vda3 May 12 12:54:02.335582 extend-filesystems[1482]: Found usr May 12 12:54:02.350294 extend-filesystems[1482]: Found vda4 May 12 12:54:02.350294 extend-filesystems[1482]: Found vda6 May 12 12:54:02.350294 extend-filesystems[1482]: Found vda7 May 12 12:54:02.350294 extend-filesystems[1482]: Found vda9 May 12 12:54:02.350294 extend-filesystems[1482]: Checking size of /dev/vda9 May 12 12:54:02.338892 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 12 12:54:02.340147 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 12 12:54:02.356283 jq[1497]: true May 12 12:54:02.340307 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 12 12:54:02.340549 systemd[1]: motdgen.service: Deactivated successfully. May 12 12:54:02.340694 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 12 12:54:02.343582 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 12 12:54:02.344863 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 12 12:54:02.361727 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 12 12:54:02.370579 jq[1502]: true May 12 12:54:02.384518 tar[1501]: linux-arm64/LICENSE May 12 12:54:02.385243 tar[1501]: linux-arm64/helm May 12 12:54:02.386362 extend-filesystems[1482]: Resized partition /dev/vda9 May 12 12:54:02.389201 extend-filesystems[1523]: resize2fs 1.47.2 (1-Jan-2025) May 12 12:54:02.395126 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 12 12:54:02.404176 update_engine[1495]: I20250512 12:54:02.404033 1495 main.cc:92] Flatcar Update Engine starting May 12 12:54:02.413582 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 12 12:54:02.422286 dbus-daemon[1479]: [system] SELinux support is enabled May 12 12:54:02.422450 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 12 12:54:02.431810 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 12 12:54:02.435462 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 12 12:54:02.435462 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1 May 12 12:54:02.435462 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 12 12:54:02.432536 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 12 12:54:02.442775 extend-filesystems[1482]: Resized filesystem in /dev/vda9 May 12 12:54:02.434495 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 12 12:54:02.448453 update_engine[1495]: I20250512 12:54:02.447227 1495 update_check_scheduler.cc:74] Next update check in 6m29s May 12 12:54:02.434511 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 12 12:54:02.438113 systemd[1]: extend-filesystems.service: Deactivated successfully. May 12 12:54:02.438351 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 12 12:54:02.447169 systemd[1]: Started update-engine.service - Update Engine. May 12 12:54:02.451049 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 12 12:54:02.483728 bash[1535]: Updated "/home/core/.ssh/authorized_keys" May 12 12:54:02.485985 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 12 12:54:02.487719 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 12 12:54:02.499780 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (Power Button) May 12 12:54:02.499991 systemd-logind[1493]: New seat seat0. May 12 12:54:02.500553 systemd[1]: Started systemd-logind.service - User Login Management. 
May 12 12:54:02.553121 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 12 12:54:02.628602 containerd[1503]: time="2025-05-12T12:54:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 12 12:54:02.630449 containerd[1503]: time="2025-05-12T12:54:02.630412560Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 12 12:54:02.639461 containerd[1503]: time="2025-05-12T12:54:02.639414800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs" May 12 12:54:02.639632 containerd[1503]: time="2025-05-12T12:54:02.639567240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 12 12:54:02.639632 containerd[1503]: time="2025-05-12T12:54:02.639602280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 12 12:54:02.639863 containerd[1503]: time="2025-05-12T12:54:02.639813680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 12 12:54:02.639941 containerd[1503]: time="2025-05-12T12:54:02.639926760Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 12 12:54:02.640011 containerd[1503]: time="2025-05-12T12:54:02.639994200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 12 12:54:02.640142 containerd[1503]: time="2025-05-12T12:54:02.640117040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 12 12:54:02.640450 containerd[1503]: time="2025-05-12T12:54:02.640189160Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 12 12:54:02.640591 containerd[1503]: time="2025-05-12T12:54:02.640538520Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 12 12:54:02.640665 containerd[1503]: time="2025-05-12T12:54:02.640649920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 12 12:54:02.640716 containerd[1503]: time="2025-05-12T12:54:02.640704040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 12 12:54:02.640773 containerd[1503]: time="2025-05-12T12:54:02.640761280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 12 12:54:02.640951 containerd[1503]: time="2025-05-12T12:54:02.640929600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 12 12:54:02.641210 containerd[1503]: time="2025-05-12T12:54:02.641186800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 12 12:54:02.641298 containerd[1503]: time="2025-05-12T12:54:02.641281520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 12 12:54:02.641349 containerd[1503]: time="2025-05-12T12:54:02.641335440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 12 12:54:02.641420 containerd[1503]: time="2025-05-12T12:54:02.641407280Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 12 12:54:02.641656 containerd[1503]: time="2025-05-12T12:54:02.641635400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 12 12:54:02.641785 containerd[1503]: time="2025-05-12T12:54:02.641767560Z" level=info msg="metadata content store policy set" policy=shared May 12 12:54:02.644894 containerd[1503]: time="2025-05-12T12:54:02.644863560Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 12 12:54:02.645010 containerd[1503]: time="2025-05-12T12:54:02.644995640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 12 12:54:02.645064 containerd[1503]: time="2025-05-12T12:54:02.645052480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 12 12:54:02.645112 containerd[1503]: time="2025-05-12T12:54:02.645100520Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 12 12:54:02.645157 containerd[1503]: time="2025-05-12T12:54:02.645146000Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 12 12:54:02.645205 containerd[1503]: time="2025-05-12T12:54:02.645193040Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 12 12:54:02.645280 containerd[1503]: time="2025-05-12T12:54:02.645266520Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 12 12:54:02.645331 containerd[1503]: time="2025-05-12T12:54:02.645319720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 12 12:54:02.645379 containerd[1503]: time="2025-05-12T12:54:02.645366560Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 12 12:54:02.645429 containerd[1503]: time="2025-05-12T12:54:02.645416280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 12 12:54:02.645478 containerd[1503]: time="2025-05-12T12:54:02.645465840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 12 12:54:02.645534 containerd[1503]: time="2025-05-12T12:54:02.645522480Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 12 12:54:02.645698 containerd[1503]: time="2025-05-12T12:54:02.645679040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 12 12:54:02.645783 containerd[1503]: time="2025-05-12T12:54:02.645768760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 12 12:54:02.645871 containerd[1503]: time="2025-05-12T12:54:02.645856680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 12 12:54:02.645925 containerd[1503]: time="2025-05-12T12:54:02.645913600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 12 12:54:02.645974 containerd[1503]: time="2025-05-12T12:54:02.645963120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 12 12:54:02.646021 containerd[1503]: time="2025-05-12T12:54:02.646009560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 12 12:54:02.646093 containerd[1503]: time="2025-05-12T12:54:02.646081200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 12 12:54:02.646146 containerd[1503]: time="2025-05-12T12:54:02.646135040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 12 
12:54:02.646195 containerd[1503]: time="2025-05-12T12:54:02.646183440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 12 12:54:02.646247 containerd[1503]: time="2025-05-12T12:54:02.646234840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 12 12:54:02.646297 containerd[1503]: time="2025-05-12T12:54:02.646285760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 12 12:54:02.646550 containerd[1503]: time="2025-05-12T12:54:02.646532320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 12 12:54:02.646609 containerd[1503]: time="2025-05-12T12:54:02.646598720Z" level=info msg="Start snapshots syncer" May 12 12:54:02.646680 containerd[1503]: time="2025-05-12T12:54:02.646666680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 12 12:54:02.646965 containerd[1503]: time="2025-05-12T12:54:02.646929000Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 12 12:54:02.647117 containerd[1503]: time="2025-05-12T12:54:02.647099760Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 12 12:54:02.647279 containerd[1503]: time="2025-05-12T12:54:02.647259080Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 12 12:54:02.647438 containerd[1503]: time="2025-05-12T12:54:02.647417040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 12 12:54:02.647521 containerd[1503]: time="2025-05-12T12:54:02.647505320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647573200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647593320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647606760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647616880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647627600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647652560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647664040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647675160Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647727040Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647742840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647752200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647761760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647769280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 12 12:54:02.648128 containerd[1503]: time="2025-05-12T12:54:02.647781080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 12 12:54:02.648412 containerd[1503]: time="2025-05-12T12:54:02.647791040Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 12 12:54:02.648412 containerd[1503]: time="2025-05-12T12:54:02.647887320Z" level=info msg="runtime interface created" May 12 12:54:02.648412 containerd[1503]: time="2025-05-12T12:54:02.647895160Z" level=info msg="created NRI interface" May 12 12:54:02.648412 containerd[1503]: time="2025-05-12T12:54:02.647904400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 12 12:54:02.648412 containerd[1503]: time="2025-05-12T12:54:02.647915880Z" level=info msg="Connect containerd service" May 12 12:54:02.648412 containerd[1503]: time="2025-05-12T12:54:02.647944080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 12 12:54:02.648645 
containerd[1503]: time="2025-05-12T12:54:02.648616400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 12:54:02.756233 containerd[1503]: time="2025-05-12T12:54:02.756170120Z" level=info msg="Start subscribing containerd event" May 12 12:54:02.756513 containerd[1503]: time="2025-05-12T12:54:02.756358360Z" level=info msg="Start recovering state" May 12 12:54:02.756513 containerd[1503]: time="2025-05-12T12:54:02.756494960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 12 12:54:02.756602 containerd[1503]: time="2025-05-12T12:54:02.756542840Z" level=info msg=serving... address=/run/containerd/containerd.sock May 12 12:54:02.756883 containerd[1503]: time="2025-05-12T12:54:02.756816520Z" level=info msg="Start event monitor" May 12 12:54:02.757048 containerd[1503]: time="2025-05-12T12:54:02.757028480Z" level=info msg="Start cni network conf syncer for default" May 12 12:54:02.757160 containerd[1503]: time="2025-05-12T12:54:02.757102880Z" level=info msg="Start streaming server" May 12 12:54:02.757160 containerd[1503]: time="2025-05-12T12:54:02.757118120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 12 12:54:02.757160 containerd[1503]: time="2025-05-12T12:54:02.757125760Z" level=info msg="runtime interface starting up..." May 12 12:54:02.757160 containerd[1503]: time="2025-05-12T12:54:02.757133520Z" level=info msg="starting plugins..." May 12 12:54:02.757497 containerd[1503]: time="2025-05-12T12:54:02.757317200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 12 12:54:02.757876 systemd[1]: Started containerd.service - containerd container runtime. 
May 12 12:54:02.759504 containerd[1503]: time="2025-05-12T12:54:02.757738120Z" level=info msg="containerd successfully booted in 0.129474s" May 12 12:54:02.835296 tar[1501]: linux-arm64/README.md May 12 12:54:02.850155 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 12 12:54:03.269417 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 12 12:54:03.288480 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 12 12:54:03.293409 systemd[1]: Starting issuegen.service - Generate /run/issue... May 12 12:54:03.316019 systemd[1]: issuegen.service: Deactivated successfully. May 12 12:54:03.316207 systemd[1]: Finished issuegen.service - Generate /run/issue. May 12 12:54:03.320740 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 12 12:54:03.349915 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 12 12:54:03.352767 systemd[1]: Started getty@tty1.service - Getty on tty1. May 12 12:54:03.355081 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 12 12:54:03.356579 systemd[1]: Reached target getty.target - Login Prompts. May 12 12:54:03.963855 systemd-networkd[1440]: eth0: Gained IPv6LL May 12 12:54:03.967941 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 12 12:54:03.970654 systemd[1]: Reached target network-online.target - Network is Online. May 12 12:54:03.973654 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 12 12:54:03.976182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 12:54:03.984083 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 12 12:54:04.001614 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 12 12:54:04.003182 systemd[1]: coreos-metadata.service: Deactivated successfully. 
May 12 12:54:04.003395 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 12 12:54:04.005908 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 12 12:54:04.497805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 12:54:04.499424 systemd[1]: Reached target multi-user.target - Multi-User System. May 12 12:54:04.502470 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 12:54:04.503983 systemd[1]: Startup finished in 2.083s (kernel) + 4.949s (initrd) + 3.973s (userspace) = 11.006s. May 12 12:54:04.907257 kubelet[1609]: E0512 12:54:04.907139 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 12:54:04.909415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 12:54:04.909556 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 12:54:04.910065 systemd[1]: kubelet.service: Consumed 785ms CPU time, 248.5M memory peak. May 12 12:54:08.111186 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 12 12:54:08.112306 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:57068.service - OpenSSH per-connection server daemon (10.0.0.1:57068). May 12 12:54:08.180813 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 57068 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:54:08.182278 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:54:08.190045 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 12 12:54:08.191050 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 12 12:54:08.197768 systemd-logind[1493]: New session 1 of user core. May 12 12:54:08.224281 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 12 12:54:08.226470 systemd[1]: Starting user@500.service - User Manager for UID 500... May 12 12:54:08.245540 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 12 12:54:08.247543 systemd-logind[1493]: New session c1 of user core. May 12 12:54:08.355510 systemd[1627]: Queued start job for default target default.target. May 12 12:54:08.367781 systemd[1627]: Created slice app.slice - User Application Slice. May 12 12:54:08.367811 systemd[1627]: Reached target paths.target - Paths. May 12 12:54:08.367865 systemd[1627]: Reached target timers.target - Timers. May 12 12:54:08.368991 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket... May 12 12:54:08.377095 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 12 12:54:08.377160 systemd[1627]: Reached target sockets.target - Sockets. May 12 12:54:08.377206 systemd[1627]: Reached target basic.target - Basic System. May 12 12:54:08.377239 systemd[1627]: Reached target default.target - Main User Target. May 12 12:54:08.377267 systemd[1627]: Startup finished in 124ms. May 12 12:54:08.377408 systemd[1]: Started user@500.service - User Manager for UID 500. May 12 12:54:08.378747 systemd[1]: Started session-1.scope - Session 1 of User core. May 12 12:54:08.438175 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:57078.service - OpenSSH per-connection server daemon (10.0.0.1:57078). 
May 12 12:54:08.486570 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 57078 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:54:08.487694 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:54:08.492006 systemd-logind[1493]: New session 2 of user core. May 12 12:54:08.500999 systemd[1]: Started session-2.scope - Session 2 of User core. May 12 12:54:08.551277 sshd[1640]: Connection closed by 10.0.0.1 port 57078 May 12 12:54:08.551626 sshd-session[1638]: pam_unix(sshd:session): session closed for user core May 12 12:54:08.563897 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:57078.service: Deactivated successfully. May 12 12:54:08.565260 systemd[1]: session-2.scope: Deactivated successfully. May 12 12:54:08.566172 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. May 12 12:54:08.568163 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:57086.service - OpenSSH per-connection server daemon (10.0.0.1:57086). May 12 12:54:08.569210 systemd-logind[1493]: Removed session 2. May 12 12:54:08.617492 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 57086 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:54:08.618602 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:54:08.622455 systemd-logind[1493]: New session 3 of user core. May 12 12:54:08.640057 systemd[1]: Started session-3.scope - Session 3 of User core. May 12 12:54:08.686922 sshd[1648]: Connection closed by 10.0.0.1 port 57086 May 12 12:54:08.687326 sshd-session[1646]: pam_unix(sshd:session): session closed for user core May 12 12:54:08.699981 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:57086.service: Deactivated successfully. May 12 12:54:08.701221 systemd[1]: session-3.scope: Deactivated successfully. May 12 12:54:08.701917 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. 
May 12 12:54:08.704106 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:57090.service - OpenSSH per-connection server daemon (10.0.0.1:57090).
May 12 12:54:08.704775 systemd-logind[1493]: Removed session 3.
May 12 12:54:08.750157 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:54:08.751171 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:54:08.754900 systemd-logind[1493]: New session 4 of user core.
May 12 12:54:08.764987 systemd[1]: Started session-4.scope - Session 4 of User core.
May 12 12:54:08.814506 sshd[1656]: Connection closed by 10.0.0.1 port 57090
May 12 12:54:08.814851 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
May 12 12:54:08.824937 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:57090.service: Deactivated successfully.
May 12 12:54:08.826249 systemd[1]: session-4.scope: Deactivated successfully.
May 12 12:54:08.826960 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
May 12 12:54:08.829098 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:57106.service - OpenSSH per-connection server daemon (10.0.0.1:57106).
May 12 12:54:08.829678 systemd-logind[1493]: Removed session 4.
May 12 12:54:08.866375 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 57106 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:54:08.867397 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:54:08.871288 systemd-logind[1493]: New session 5 of user core.
May 12 12:54:08.876993 systemd[1]: Started session-5.scope - Session 5 of User core.
May 12 12:54:08.936232 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 12 12:54:08.936505 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 12:54:08.952492 sudo[1665]: pam_unix(sudo:session): session closed for user root
May 12 12:54:08.953868 sshd[1664]: Connection closed by 10.0.0.1 port 57106
May 12 12:54:08.954263 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
May 12 12:54:08.969041 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:57106.service: Deactivated successfully.
May 12 12:54:08.971166 systemd[1]: session-5.scope: Deactivated successfully.
May 12 12:54:08.971903 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
May 12 12:54:08.974342 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122).
May 12 12:54:08.975241 systemd-logind[1493]: Removed session 5.
May 12 12:54:09.028470 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:54:09.029657 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:54:09.034283 systemd-logind[1493]: New session 6 of user core.
May 12 12:54:09.041064 systemd[1]: Started session-6.scope - Session 6 of User core.
May 12 12:54:09.090038 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 12 12:54:09.090303 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 12:54:09.094404 sudo[1675]: pam_unix(sudo:session): session closed for user root
May 12 12:54:09.098558 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 12 12:54:09.098794 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 12:54:09.106440 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 12:54:09.143406 augenrules[1697]: No rules
May 12 12:54:09.144442 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 12:54:09.144663 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 12:54:09.145721 sudo[1674]: pam_unix(sudo:session): session closed for user root
May 12 12:54:09.146796 sshd[1673]: Connection closed by 10.0.0.1 port 57122
May 12 12:54:09.147235 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
May 12 12:54:09.156833 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:57122.service: Deactivated successfully.
May 12 12:54:09.160202 systemd[1]: session-6.scope: Deactivated successfully.
May 12 12:54:09.160932 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit.
May 12 12:54:09.163138 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:57132.service - OpenSSH per-connection server daemon (10.0.0.1:57132).
May 12 12:54:09.163712 systemd-logind[1493]: Removed session 6.
May 12 12:54:09.209486 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 57132 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:54:09.210492 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:54:09.214735 systemd-logind[1493]: New session 7 of user core.
May 12 12:54:09.224029 systemd[1]: Started session-7.scope - Session 7 of User core.
May 12 12:54:09.274133 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 12 12:54:09.274639 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 12:54:09.630362 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 12 12:54:09.652232 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 12 12:54:09.917602 dockerd[1730]: time="2025-05-12T12:54:09.917031824Z" level=info msg="Starting up"
May 12 12:54:09.918561 dockerd[1730]: time="2025-05-12T12:54:09.918527089Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 12 12:54:10.026307 dockerd[1730]: time="2025-05-12T12:54:10.026256774Z" level=info msg="Loading containers: start."
May 12 12:54:10.038250 kernel: Initializing XFRM netlink socket
May 12 12:54:10.228631 systemd-networkd[1440]: docker0: Link UP
May 12 12:54:10.231477 dockerd[1730]: time="2025-05-12T12:54:10.231437654Z" level=info msg="Loading containers: done."
May 12 12:54:10.244605 dockerd[1730]: time="2025-05-12T12:54:10.244558982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 12 12:54:10.244721 dockerd[1730]: time="2025-05-12T12:54:10.244635141Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 12 12:54:10.244747 dockerd[1730]: time="2025-05-12T12:54:10.244727377Z" level=info msg="Initializing buildkit"
May 12 12:54:10.264551 dockerd[1730]: time="2025-05-12T12:54:10.264517197Z" level=info msg="Completed buildkit initialization"
May 12 12:54:10.270231 dockerd[1730]: time="2025-05-12T12:54:10.270188421Z" level=info msg="Daemon has completed initialization"
May 12 12:54:10.270312 dockerd[1730]: time="2025-05-12T12:54:10.270244486Z" level=info msg="API listen on /run/docker.sock"
May 12 12:54:10.270482 systemd[1]: Started docker.service - Docker Application Container Engine.
May 12 12:54:10.910961 containerd[1503]: time="2025-05-12T12:54:10.910900608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 12 12:54:11.558909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099562499.mount: Deactivated successfully.
May 12 12:54:12.531484 containerd[1503]: time="2025-05-12T12:54:12.531361009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:12.532244 containerd[1503]: time="2025-05-12T12:54:12.532219998Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
May 12 12:54:12.533060 containerd[1503]: time="2025-05-12T12:54:12.533034386Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:12.536106 containerd[1503]: time="2025-05-12T12:54:12.536069212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:12.537113 containerd[1503]: time="2025-05-12T12:54:12.537086775Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.626145143s"
May 12 12:54:12.537113 containerd[1503]: time="2025-05-12T12:54:12.537137117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 12 12:54:12.537713 containerd[1503]: time="2025-05-12T12:54:12.537693050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 12 12:54:13.716297 containerd[1503]: time="2025-05-12T12:54:13.716242979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:13.716680 containerd[1503]: time="2025-05-12T12:54:13.716652838Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
May 12 12:54:13.717560 containerd[1503]: time="2025-05-12T12:54:13.717533067Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:13.719768 containerd[1503]: time="2025-05-12T12:54:13.719717630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:13.721519 containerd[1503]: time="2025-05-12T12:54:13.721470946Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.183749478s"
May 12 12:54:13.721519 containerd[1503]: time="2025-05-12T12:54:13.721503609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 12 12:54:13.721970 containerd[1503]: time="2025-05-12T12:54:13.721919206Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 12 12:54:14.775566 containerd[1503]: time="2025-05-12T12:54:14.775517131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:14.776294 containerd[1503]: time="2025-05-12T12:54:14.776231152Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
May 12 12:54:14.776826 containerd[1503]: time="2025-05-12T12:54:14.776802336Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:14.779672 containerd[1503]: time="2025-05-12T12:54:14.779642975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:14.780602 containerd[1503]: time="2025-05-12T12:54:14.780556589Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.058599146s"
May 12 12:54:14.780602 containerd[1503]: time="2025-05-12T12:54:14.780593171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 12 12:54:14.781081 containerd[1503]: time="2025-05-12T12:54:14.781022562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 12 12:54:15.160004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 12 12:54:15.161537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 12:54:15.285971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 12:54:15.289609 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 12:54:15.327926 kubelet[2009]: E0512 12:54:15.327872 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 12:54:15.331201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 12:54:15.331472 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 12:54:15.331920 systemd[1]: kubelet.service: Consumed 138ms CPU time, 102.5M memory peak.
May 12 12:54:15.831106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834897692.mount: Deactivated successfully.
May 12 12:54:16.179534 containerd[1503]: time="2025-05-12T12:54:16.179423700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:16.180326 containerd[1503]: time="2025-05-12T12:54:16.180114007Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
May 12 12:54:16.181035 containerd[1503]: time="2025-05-12T12:54:16.181003698Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:16.184152 containerd[1503]: time="2025-05-12T12:54:16.184115591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:16.184784 containerd[1503]: time="2025-05-12T12:54:16.184616175Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.403560927s"
May 12 12:54:16.184784 containerd[1503]: time="2025-05-12T12:54:16.184641108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 12 12:54:16.185168 containerd[1503]: time="2025-05-12T12:54:16.185152955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 12 12:54:16.766539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2510874613.mount: Deactivated successfully.
May 12 12:54:17.426688 containerd[1503]: time="2025-05-12T12:54:17.426615625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:17.427677 containerd[1503]: time="2025-05-12T12:54:17.427434188Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 12 12:54:17.428918 containerd[1503]: time="2025-05-12T12:54:17.428887651Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:17.431877 containerd[1503]: time="2025-05-12T12:54:17.431830444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:17.433375 containerd[1503]: time="2025-05-12T12:54:17.433318933Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.248064765s"
May 12 12:54:17.433375 containerd[1503]: time="2025-05-12T12:54:17.433352716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 12 12:54:17.433786 containerd[1503]: time="2025-05-12T12:54:17.433768649Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 12 12:54:17.834203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816129680.mount: Deactivated successfully.
May 12 12:54:17.838308 containerd[1503]: time="2025-05-12T12:54:17.838261367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 12 12:54:17.839288 containerd[1503]: time="2025-05-12T12:54:17.839260625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 12 12:54:17.840215 containerd[1503]: time="2025-05-12T12:54:17.840181418Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 12 12:54:17.841860 containerd[1503]: time="2025-05-12T12:54:17.841816820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 12 12:54:17.842918 containerd[1503]: time="2025-05-12T12:54:17.842891699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 409.097161ms"
May 12 12:54:17.843249 containerd[1503]: time="2025-05-12T12:54:17.843183722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 12 12:54:17.843861 containerd[1503]: time="2025-05-12T12:54:17.843804477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 12 12:54:18.327016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260980014.mount: Deactivated successfully.
May 12 12:54:19.930455 containerd[1503]: time="2025-05-12T12:54:19.930398163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:19.931018 containerd[1503]: time="2025-05-12T12:54:19.930983156Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 12 12:54:19.931950 containerd[1503]: time="2025-05-12T12:54:19.931892732Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:19.934456 containerd[1503]: time="2025-05-12T12:54:19.934404110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:54:19.936392 containerd[1503]: time="2025-05-12T12:54:19.936364863Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.092536264s"
May 12 12:54:19.936392 containerd[1503]: time="2025-05-12T12:54:19.936397830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 12 12:54:25.581829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 12 12:54:25.583521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 12:54:25.656415 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 12 12:54:25.656672 systemd[1]: kubelet.service: Failed with result 'signal'.
May 12 12:54:25.657057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 12:54:25.660493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 12:54:25.677871 systemd[1]: Reload requested from client PID 2168 ('systemctl') (unit session-7.scope)...
May 12 12:54:25.677887 systemd[1]: Reloading...
May 12 12:54:25.757871 zram_generator::config[2218]: No configuration found.
May 12 12:54:25.892282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 12:54:25.991870 systemd[1]: Reloading finished in 313 ms.
May 12 12:54:26.050384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 12:54:26.053266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 12:54:26.053796 systemd[1]: kubelet.service: Deactivated successfully.
May 12 12:54:26.054011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 12:54:26.054051 systemd[1]: kubelet.service: Consumed 86ms CPU time, 90.3M memory peak.
May 12 12:54:26.055411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 12:54:26.182352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 12:54:26.186428 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 12 12:54:26.220898 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 12:54:26.220898 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 12 12:54:26.220898 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 12:54:26.221222 kubelet[2259]: I0512 12:54:26.220967 2259 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 12 12:54:27.776592 kubelet[2259]: I0512 12:54:27.776542 2259 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 12 12:54:27.776592 kubelet[2259]: I0512 12:54:27.776579 2259 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 12 12:54:27.776939 kubelet[2259]: I0512 12:54:27.776871 2259 server.go:954] "Client rotation is on, will bootstrap in background"
May 12 12:54:27.811630 kubelet[2259]: E0512 12:54:27.811596 2259 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 12 12:54:27.812966 kubelet[2259]: I0512 12:54:27.812901 2259 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 12 12:54:27.818881 kubelet[2259]: I0512 12:54:27.818861 2259 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 12 12:54:27.821648 kubelet[2259]: I0512 12:54:27.821630 2259 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 12 12:54:27.822267 kubelet[2259]: I0512 12:54:27.822237 2259 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 12 12:54:27.822446 kubelet[2259]: I0512 12:54:27.822270 2259 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 12 12:54:27.822603 kubelet[2259]: I0512 12:54:27.822525 2259 topology_manager.go:138] "Creating topology manager with none policy"
May 12 12:54:27.822603 kubelet[2259]: I0512 12:54:27.822534 2259 container_manager_linux.go:304] "Creating device plugin manager"
May 12 12:54:27.822726 kubelet[2259]: I0512 12:54:27.822709 2259 state_mem.go:36] "Initialized new in-memory state store"
May 12 12:54:27.826958 kubelet[2259]: I0512 12:54:27.826931 2259 kubelet.go:446] "Attempting to sync node with API server"
May 12 12:54:27.826958 kubelet[2259]: I0512 12:54:27.826956 2259 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 12 12:54:27.827034 kubelet[2259]: I0512 12:54:27.826980 2259 kubelet.go:352] "Adding apiserver pod source"
May 12 12:54:27.827163 kubelet[2259]: I0512 12:54:27.827089 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 12 12:54:27.827740 kubelet[2259]: W0512 12:54:27.827688 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 12 12:54:27.827867 kubelet[2259]: E0512 12:54:27.827832 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 12 12:54:27.828942 kubelet[2259]: W0512 12:54:27.828866 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 12 12:54:27.828942 kubelet[2259]: E0512 12:54:27.828910 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 12 12:54:27.830345 kubelet[2259]: I0512 12:54:27.830327 2259 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 12 12:54:27.831178 kubelet[2259]: I0512 12:54:27.831103 2259 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 12 12:54:27.831245 kubelet[2259]: W0512 12:54:27.831226 2259 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 12 12:54:27.832193 kubelet[2259]: I0512 12:54:27.832175 2259 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 12 12:54:27.832263 kubelet[2259]: I0512 12:54:27.832226 2259 server.go:1287] "Started kubelet"
May 12 12:54:27.832984 kubelet[2259]: I0512 12:54:27.832738 2259 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 12 12:54:27.836040 kubelet[2259]: I0512 12:54:27.836012 2259 server.go:490] "Adding debug handlers to kubelet server"
May 12 12:54:27.836818 kubelet[2259]: I0512 12:54:27.836745 2259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 12 12:54:27.837180 kubelet[2259]: I0512 12:54:27.837148 2259 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 12 12:54:27.837615 kubelet[2259]: I0512 12:54:27.837594 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 12 12:54:27.837963 kubelet[2259]: I0512 12:54:27.837936 2259 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 12 12:54:27.838396 kubelet[2259]: E0512 12:54:27.838104 2259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ec8c83e6596be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 12:54:27.832190654 +0000 UTC m=+1.642482907,LastTimestamp:2025-05-12 12:54:27.832190654 +0000 UTC m=+1.642482907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 12 12:54:27.839463 kubelet[2259]: E0512 12:54:27.838854 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 12 12:54:27.839463 kubelet[2259]: I0512 12:54:27.838901 2259 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 12 12:54:27.839463 kubelet[2259]: I0512 12:54:27.839088 2259 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 12 12:54:27.839463 kubelet[2259]: I0512 12:54:27.839133 2259 reconciler.go:26] "Reconciler: start to sync state"
May 12 12:54:27.839463 kubelet[2259]: W0512 12:54:27.839418 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 12 12:54:27.839463 kubelet[2259]: E0512 12:54:27.839456 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 12 12:54:27.840952 kubelet[2259]: E0512 12:54:27.840912 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms"
May 12 12:54:27.841531 kubelet[2259]: I0512 12:54:27.841509 2259 factory.go:221] Registration of the containerd container factory successfully
May 12 12:54:27.841531 kubelet[2259]: I0512 12:54:27.841526 2259 factory.go:221] Registration of the systemd container factory successfully
May 12 12:54:27.841645 kubelet[2259]: I0512 12:54:27.841589 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 12 12:54:27.841716 kubelet[2259]: E0512 12:54:27.841521 2259 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 12 12:54:27.853397 kubelet[2259]: I0512 12:54:27.853369 2259 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 12 12:54:27.853457 kubelet[2259]: I0512 12:54:27.853391 2259 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 12 12:54:27.853478 kubelet[2259]: I0512 12:54:27.853456 2259 state_mem.go:36] "Initialized new in-memory state store"
May 12 12:54:27.854160 kubelet[2259]: I0512 12:54:27.854011 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 12 12:54:27.855767 kubelet[2259]: I0512 12:54:27.855282 2259 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" May 12 12:54:27.855767 kubelet[2259]: I0512 12:54:27.855313 2259 status_manager.go:227] "Starting to sync pod status with apiserver" May 12 12:54:27.855767 kubelet[2259]: I0512 12:54:27.855330 2259 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 12 12:54:27.855767 kubelet[2259]: I0512 12:54:27.855337 2259 kubelet.go:2388] "Starting kubelet main sync loop" May 12 12:54:27.855767 kubelet[2259]: E0512 12:54:27.855374 2259 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 12:54:27.856047 kubelet[2259]: W0512 12:54:27.856001 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 12 12:54:27.856089 kubelet[2259]: E0512 12:54:27.856057 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 12 12:54:27.924623 kubelet[2259]: I0512 12:54:27.924577 2259 policy_none.go:49] "None policy: Start" May 12 12:54:27.924746 kubelet[2259]: I0512 12:54:27.924638 2259 memory_manager.go:186] "Starting memorymanager" policy="None" May 12 12:54:27.924746 kubelet[2259]: I0512 12:54:27.924654 2259 state_mem.go:35] "Initializing new in-memory state store" May 12 12:54:27.930632 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 12 12:54:27.939573 kubelet[2259]: E0512 12:54:27.939535 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 12:54:27.948021 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 12 12:54:27.951059 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 12 12:54:27.955753 kubelet[2259]: E0512 12:54:27.955718 2259 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 12 12:54:27.959864 kubelet[2259]: I0512 12:54:27.959616 2259 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 12:54:27.959864 kubelet[2259]: I0512 12:54:27.959828 2259 eviction_manager.go:189] "Eviction manager: starting control loop" May 12 12:54:27.959985 kubelet[2259]: I0512 12:54:27.959951 2259 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 12:54:27.960250 kubelet[2259]: I0512 12:54:27.960221 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 12:54:27.961213 kubelet[2259]: E0512 12:54:27.961192 2259 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 12 12:54:27.961284 kubelet[2259]: E0512 12:54:27.961238 2259 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 12 12:54:28.041617 kubelet[2259]: E0512 12:54:28.041500 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" May 12 12:54:28.061935 kubelet[2259]: I0512 12:54:28.061889 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 12:54:28.062337 kubelet[2259]: E0512 12:54:28.062311 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 12 12:54:28.165415 systemd[1]: Created slice kubepods-burstable-poda8e0bd089d2186a0640a50b9c05a6a41.slice - libcontainer container kubepods-burstable-poda8e0bd089d2186a0640a50b9c05a6a41.slice. May 12 12:54:28.189219 kubelet[2259]: E0512 12:54:28.189191 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:28.193219 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 12 12:54:28.195346 kubelet[2259]: E0512 12:54:28.195313 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:28.196973 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 12 12:54:28.199287 kubelet[2259]: E0512 12:54:28.199264 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:28.241094 kubelet[2259]: I0512 12:54:28.241064 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 12 12:54:28.241364 kubelet[2259]: I0512 12:54:28.241245 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 12 12:54:28.241364 kubelet[2259]: I0512 12:54:28.241272 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 12 12:54:28.241364 kubelet[2259]: I0512 12:54:28.241290 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:28.241364 kubelet[2259]: I0512 12:54:28.241306 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:28.241364 kubelet[2259]: I0512 12:54:28.241321 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:28.241494 kubelet[2259]: I0512 12:54:28.241347 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:28.241494 kubelet[2259]: I0512 12:54:28.241403 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:28.241494 kubelet[2259]: I0512 12:54:28.241433 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 12 12:54:28.264235 kubelet[2259]: I0512 12:54:28.264188 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 12:54:28.264546 
kubelet[2259]: E0512 12:54:28.264519 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 12 12:54:28.282009 kubelet[2259]: E0512 12:54:28.281912 2259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ec8c83e6596be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 12:54:27.832190654 +0000 UTC m=+1.642482907,LastTimestamp:2025-05-12 12:54:27.832190654 +0000 UTC m=+1.642482907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 12 12:54:28.442429 kubelet[2259]: E0512 12:54:28.442314 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" May 12 12:54:28.490307 containerd[1503]: time="2025-05-12T12:54:28.490267580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e0bd089d2186a0640a50b9c05a6a41,Namespace:kube-system,Attempt:0,}" May 12 12:54:28.496849 containerd[1503]: time="2025-05-12T12:54:28.496800181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 12 12:54:28.500350 containerd[1503]: time="2025-05-12T12:54:28.500301682Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 12 12:54:28.508382 containerd[1503]: time="2025-05-12T12:54:28.508332405Z" level=info msg="connecting to shim 1c092ee97c555fe7cb16f9d314f5dee9a70c01ef56f0b669e8128b95b3bdeb11" address="unix:///run/containerd/s/51a19d3023ba821d0c200db8908cf8009829f1e68a2ef09eab7fcecf006c8f53" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:28.527431 containerd[1503]: time="2025-05-12T12:54:28.527288212Z" level=info msg="connecting to shim ccd21ffd5c8ad6777abec2a8c4394c5cdf9ab6931e58393fa08d5f3a89ea0091" address="unix:///run/containerd/s/35473ec91bfcf4b7f23a818fae762f7b694fa186bba32b3d2df8925fdd941e31" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:28.528029 containerd[1503]: time="2025-05-12T12:54:28.527998517Z" level=info msg="connecting to shim 99177317456412a3f2d24f6601a5e5052f8342f0c6e7db5283f575fc31360765" address="unix:///run/containerd/s/a2893ba6deed119f3860672a70350fe40f52420d96e10f66cb64fb44b0d4ff7f" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:28.542021 systemd[1]: Started cri-containerd-1c092ee97c555fe7cb16f9d314f5dee9a70c01ef56f0b669e8128b95b3bdeb11.scope - libcontainer container 1c092ee97c555fe7cb16f9d314f5dee9a70c01ef56f0b669e8128b95b3bdeb11. May 12 12:54:28.561988 systemd[1]: Started cri-containerd-99177317456412a3f2d24f6601a5e5052f8342f0c6e7db5283f575fc31360765.scope - libcontainer container 99177317456412a3f2d24f6601a5e5052f8342f0c6e7db5283f575fc31360765. May 12 12:54:28.563078 systemd[1]: Started cri-containerd-ccd21ffd5c8ad6777abec2a8c4394c5cdf9ab6931e58393fa08d5f3a89ea0091.scope - libcontainer container ccd21ffd5c8ad6777abec2a8c4394c5cdf9ab6931e58393fa08d5f3a89ea0091. 
May 12 12:54:28.599396 containerd[1503]: time="2025-05-12T12:54:28.599348948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccd21ffd5c8ad6777abec2a8c4394c5cdf9ab6931e58393fa08d5f3a89ea0091\"" May 12 12:54:28.603653 containerd[1503]: time="2025-05-12T12:54:28.603474957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e0bd089d2186a0640a50b9c05a6a41,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c092ee97c555fe7cb16f9d314f5dee9a70c01ef56f0b669e8128b95b3bdeb11\"" May 12 12:54:28.604353 containerd[1503]: time="2025-05-12T12:54:28.603807940Z" level=info msg="CreateContainer within sandbox \"ccd21ffd5c8ad6777abec2a8c4394c5cdf9ab6931e58393fa08d5f3a89ea0091\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 12:54:28.608555 containerd[1503]: time="2025-05-12T12:54:28.608461335Z" level=info msg="CreateContainer within sandbox \"1c092ee97c555fe7cb16f9d314f5dee9a70c01ef56f0b669e8128b95b3bdeb11\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 12:54:28.611062 containerd[1503]: time="2025-05-12T12:54:28.611038440Z" level=info msg="Container 74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:28.611574 containerd[1503]: time="2025-05-12T12:54:28.611527330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"99177317456412a3f2d24f6601a5e5052f8342f0c6e7db5283f575fc31360765\"" May 12 12:54:28.613612 containerd[1503]: time="2025-05-12T12:54:28.613572807Z" level=info msg="CreateContainer within sandbox \"99177317456412a3f2d24f6601a5e5052f8342f0c6e7db5283f575fc31360765\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 
12:54:28.618286 containerd[1503]: time="2025-05-12T12:54:28.618252093Z" level=info msg="Container 8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:28.618286 containerd[1503]: time="2025-05-12T12:54:28.618276303Z" level=info msg="CreateContainer within sandbox \"ccd21ffd5c8ad6777abec2a8c4394c5cdf9ab6931e58393fa08d5f3a89ea0091\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a\"" May 12 12:54:28.618872 containerd[1503]: time="2025-05-12T12:54:28.618827900Z" level=info msg="StartContainer for \"74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a\"" May 12 12:54:28.619939 containerd[1503]: time="2025-05-12T12:54:28.619910004Z" level=info msg="connecting to shim 74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a" address="unix:///run/containerd/s/35473ec91bfcf4b7f23a818fae762f7b694fa186bba32b3d2df8925fdd941e31" protocol=ttrpc version=3 May 12 12:54:28.625027 containerd[1503]: time="2025-05-12T12:54:28.624895781Z" level=info msg="Container 8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:28.627609 containerd[1503]: time="2025-05-12T12:54:28.627572049Z" level=info msg="CreateContainer within sandbox \"1c092ee97c555fe7cb16f9d314f5dee9a70c01ef56f0b669e8128b95b3bdeb11\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41\"" May 12 12:54:28.628540 containerd[1503]: time="2025-05-12T12:54:28.628511612Z" level=info msg="StartContainer for \"8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41\"" May 12 12:54:28.630660 containerd[1503]: time="2025-05-12T12:54:28.630626798Z" level=info msg="CreateContainer within sandbox \"99177317456412a3f2d24f6601a5e5052f8342f0c6e7db5283f575fc31360765\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d\"" May 12 12:54:28.631940 containerd[1503]: time="2025-05-12T12:54:28.631428742Z" level=info msg="connecting to shim 8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41" address="unix:///run/containerd/s/51a19d3023ba821d0c200db8908cf8009829f1e68a2ef09eab7fcecf006c8f53" protocol=ttrpc version=3 May 12 12:54:28.631940 containerd[1503]: time="2025-05-12T12:54:28.631492490Z" level=info msg="StartContainer for \"8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d\"" May 12 12:54:28.634030 containerd[1503]: time="2025-05-12T12:54:28.633993722Z" level=info msg="connecting to shim 8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d" address="unix:///run/containerd/s/a2893ba6deed119f3860672a70350fe40f52420d96e10f66cb64fb44b0d4ff7f" protocol=ttrpc version=3 May 12 12:54:28.638035 systemd[1]: Started cri-containerd-74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a.scope - libcontainer container 74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a. May 12 12:54:28.661092 systemd[1]: Started cri-containerd-8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41.scope - libcontainer container 8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41. May 12 12:54:28.665563 systemd[1]: Started cri-containerd-8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d.scope - libcontainer container 8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d. 
May 12 12:54:28.666639 kubelet[2259]: I0512 12:54:28.666593 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 12:54:28.667974 kubelet[2259]: E0512 12:54:28.667933 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" May 12 12:54:28.700076 containerd[1503]: time="2025-05-12T12:54:28.699230732Z" level=info msg="StartContainer for \"74463a9f2901f15f2efe6084d532ba21d3fa47914a9ad7189dbaf1bdb6ed7d5a\" returns successfully" May 12 12:54:28.723607 containerd[1503]: time="2025-05-12T12:54:28.721851671Z" level=info msg="StartContainer for \"8cf46f74876b1c62ae294a654d0f27857d66004cdd18379a6655aaf5571b8c41\" returns successfully" May 12 12:54:28.731043 containerd[1503]: time="2025-05-12T12:54:28.731006116Z" level=info msg="StartContainer for \"8ca54bbb9bf90b04549a3136bc19555e0b1ed05d05fd777494c8010041475d4d\" returns successfully" May 12 12:54:28.759611 kubelet[2259]: W0512 12:54:28.759169 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 12 12:54:28.759611 kubelet[2259]: E0512 12:54:28.759237 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 12 12:54:28.861524 kubelet[2259]: E0512 12:54:28.861438 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:28.863551 kubelet[2259]: E0512 12:54:28.863149 2259 kubelet.go:3196] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:28.865809 kubelet[2259]: E0512 12:54:28.865679 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:29.469432 kubelet[2259]: I0512 12:54:29.469398 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 12:54:29.867428 kubelet[2259]: E0512 12:54:29.867185 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:29.867428 kubelet[2259]: E0512 12:54:29.867294 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 12:54:30.121826 kubelet[2259]: E0512 12:54:30.121679 2259 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 12:54:30.204252 kubelet[2259]: I0512 12:54:30.204215 2259 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 12 12:54:30.240949 kubelet[2259]: I0512 12:54:30.240909 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 12 12:54:30.253799 kubelet[2259]: E0512 12:54:30.253755 2259 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 12 12:54:30.253799 kubelet[2259]: I0512 12:54:30.253795 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 12 12:54:30.258484 kubelet[2259]: E0512 12:54:30.258444 2259 kubelet.go:3202] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 12 12:54:30.258484 kubelet[2259]: I0512 12:54:30.258482 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 12 12:54:30.260354 kubelet[2259]: E0512 12:54:30.260325 2259 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 12 12:54:30.736675 kubelet[2259]: I0512 12:54:30.736630 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 12 12:54:30.738615 kubelet[2259]: E0512 12:54:30.738579 2259 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 12 12:54:30.829715 kubelet[2259]: I0512 12:54:30.829684 2259 apiserver.go:52] "Watching apiserver" May 12 12:54:30.839425 kubelet[2259]: I0512 12:54:30.839389 2259 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 12:54:32.151382 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)... May 12 12:54:32.151400 systemd[1]: Reloading... May 12 12:54:32.233882 zram_generator::config[2578]: No configuration found. May 12 12:54:32.338860 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 12:54:32.456056 systemd[1]: Reloading finished in 304 ms. May 12 12:54:32.481386 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
May 12 12:54:32.492968 systemd[1]: kubelet.service: Deactivated successfully. May 12 12:54:32.493244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 12:54:32.493296 systemd[1]: kubelet.service: Consumed 2.064s CPU time, 123.1M memory peak. May 12 12:54:32.496028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 12:54:32.619454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 12:54:32.623180 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 12:54:32.667178 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 12:54:32.667178 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 12 12:54:32.667178 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 12 12:54:32.667521 kubelet[2619]: I0512 12:54:32.667237 2619 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 12:54:32.675117 kubelet[2619]: I0512 12:54:32.675072 2619 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 12 12:54:32.675117 kubelet[2619]: I0512 12:54:32.675107 2619 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 12:54:32.675382 kubelet[2619]: I0512 12:54:32.675356 2619 server.go:954] "Client rotation is on, will bootstrap in background" May 12 12:54:32.676588 kubelet[2619]: I0512 12:54:32.676566 2619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 12 12:54:32.678824 kubelet[2619]: I0512 12:54:32.678715 2619 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 12:54:32.682045 kubelet[2619]: I0512 12:54:32.682015 2619 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 12 12:54:32.686036 kubelet[2619]: I0512 12:54:32.685937 2619 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 12:54:32.686414 kubelet[2619]: I0512 12:54:32.686361 2619 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 12:54:32.686565 kubelet[2619]: I0512 12:54:32.686407 2619 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 12 12:54:32.686643 kubelet[2619]: I0512 12:54:32.686579 2619 topology_manager.go:138] "Creating topology manager with none policy" 
May 12 12:54:32.686643 kubelet[2619]: I0512 12:54:32.686588 2619 container_manager_linux.go:304] "Creating device plugin manager" May 12 12:54:32.686643 kubelet[2619]: I0512 12:54:32.686632 2619 state_mem.go:36] "Initialized new in-memory state store" May 12 12:54:32.686784 kubelet[2619]: I0512 12:54:32.686773 2619 kubelet.go:446] "Attempting to sync node with API server" May 12 12:54:32.686810 kubelet[2619]: I0512 12:54:32.686787 2619 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 12:54:32.686810 kubelet[2619]: I0512 12:54:32.686810 2619 kubelet.go:352] "Adding apiserver pod source" May 12 12:54:32.686908 kubelet[2619]: I0512 12:54:32.686827 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 12:54:32.688107 kubelet[2619]: I0512 12:54:32.688084 2619 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 12 12:54:32.688636 kubelet[2619]: I0512 12:54:32.688615 2619 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 12:54:32.689054 kubelet[2619]: I0512 12:54:32.689035 2619 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 12 12:54:32.689102 kubelet[2619]: I0512 12:54:32.689065 2619 server.go:1287] "Started kubelet" May 12 12:54:32.690547 kubelet[2619]: I0512 12:54:32.690467 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 12:54:32.690853 kubelet[2619]: I0512 12:54:32.690813 2619 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 12:54:32.690933 kubelet[2619]: I0512 12:54:32.690897 2619 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 12 12:54:32.691502 kubelet[2619]: E0512 12:54:32.691480 2619 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 12:54:32.691699 kubelet[2619]: I0512 12:54:32.691684 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 12:54:32.692057 kubelet[2619]: I0512 12:54:32.692030 2619 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 12 12:54:32.692742 kubelet[2619]: E0512 12:54:32.692683 2619 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 12:54:32.692742 kubelet[2619]: I0512 12:54:32.692718 2619 volume_manager.go:297] "Starting Kubelet Volume Manager" May 12 12:54:32.693859 kubelet[2619]: I0512 12:54:32.692914 2619 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 12:54:32.693859 kubelet[2619]: I0512 12:54:32.693030 2619 reconciler.go:26] "Reconciler: start to sync state" May 12 12:54:32.699866 kubelet[2619]: I0512 12:54:32.698493 2619 server.go:490] "Adding debug handlers to kubelet server" May 12 12:54:32.704317 kubelet[2619]: I0512 12:54:32.704194 2619 factory.go:221] Registration of the systemd container factory successfully May 12 12:54:32.704518 kubelet[2619]: I0512 12:54:32.704502 2619 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 12:54:32.706512 kubelet[2619]: I0512 12:54:32.706431 2619 factory.go:221] Registration of the containerd container factory successfully May 12 12:54:32.718943 kubelet[2619]: I0512 12:54:32.718868 2619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 12:54:32.719859 kubelet[2619]: I0512 12:54:32.719810 2619 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 12:54:32.719859 kubelet[2619]: I0512 12:54:32.719834 2619 status_manager.go:227] "Starting to sync pod status with apiserver" May 12 12:54:32.719922 kubelet[2619]: I0512 12:54:32.719874 2619 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 12 12:54:32.719922 kubelet[2619]: I0512 12:54:32.719882 2619 kubelet.go:2388] "Starting kubelet main sync loop" May 12 12:54:32.719972 kubelet[2619]: E0512 12:54:32.719921 2619 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 12:54:32.746068 kubelet[2619]: I0512 12:54:32.746044 2619 cpu_manager.go:221] "Starting CPU manager" policy="none" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746224 2619 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746248 2619 state_mem.go:36] "Initialized new in-memory state store" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746388 2619 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746400 2619 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746423 2619 policy_none.go:49] "None policy: Start" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746432 2619 memory_manager.go:186] "Starting memorymanager" policy="None" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746440 2619 state_mem.go:35] "Initializing new in-memory state store" May 12 12:54:32.746834 kubelet[2619]: I0512 12:54:32.746536 2619 state_mem.go:75] "Updated machine memory state" May 12 12:54:32.750340 kubelet[2619]: I0512 12:54:32.750319 2619 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 12:54:32.750520 kubelet[2619]: I0512 
12:54:32.750468 2619 eviction_manager.go:189] "Eviction manager: starting control loop" May 12 12:54:32.750520 kubelet[2619]: I0512 12:54:32.750479 2619 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 12:54:32.750758 kubelet[2619]: I0512 12:54:32.750732 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 12:54:32.752769 kubelet[2619]: E0512 12:54:32.752356 2619 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 12 12:54:32.821285 kubelet[2619]: I0512 12:54:32.821252 2619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 12 12:54:32.821642 kubelet[2619]: I0512 12:54:32.821387 2619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 12 12:54:32.821824 kubelet[2619]: I0512 12:54:32.821540 2619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 12 12:54:32.852269 kubelet[2619]: I0512 12:54:32.852235 2619 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 12:54:32.858334 kubelet[2619]: I0512 12:54:32.858304 2619 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 12 12:54:32.858416 kubelet[2619]: I0512 12:54:32.858384 2619 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 12 12:54:32.894501 kubelet[2619]: I0512 12:54:32.894455 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:32.894584 kubelet[2619]: I0512 12:54:32.894527 2619 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:32.894584 kubelet[2619]: I0512 12:54:32.894568 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 12 12:54:32.894633 kubelet[2619]: I0512 12:54:32.894596 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 12 12:54:32.894633 kubelet[2619]: I0512 12:54:32.894624 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 12 12:54:32.894672 kubelet[2619]: I0512 12:54:32.894650 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:32.894696 kubelet[2619]: I0512 12:54:32.894675 
2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:32.894696 kubelet[2619]: I0512 12:54:32.894689 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 12:54:32.894740 kubelet[2619]: I0512 12:54:32.894702 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e0bd089d2186a0640a50b9c05a6a41-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e0bd089d2186a0640a50b9c05a6a41\") " pod="kube-system/kube-apiserver-localhost" May 12 12:54:33.688191 kubelet[2619]: I0512 12:54:33.688052 2619 apiserver.go:52] "Watching apiserver" May 12 12:54:33.693480 kubelet[2619]: I0512 12:54:33.693444 2619 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 12:54:33.735316 kubelet[2619]: I0512 12:54:33.735064 2619 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 12 12:54:33.739010 kubelet[2619]: E0512 12:54:33.738972 2619 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 12 12:54:33.789588 kubelet[2619]: I0512 12:54:33.789517 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.789498668 
podStartE2EDuration="1.789498668s" podCreationTimestamp="2025-05-12 12:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 12:54:33.770373861 +0000 UTC m=+1.143698440" watchObservedRunningTime="2025-05-12 12:54:33.789498668 +0000 UTC m=+1.162823207" May 12 12:54:33.799866 kubelet[2619]: I0512 12:54:33.799551 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.799533195 podStartE2EDuration="1.799533195s" podCreationTimestamp="2025-05-12 12:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 12:54:33.790031985 +0000 UTC m=+1.163356484" watchObservedRunningTime="2025-05-12 12:54:33.799533195 +0000 UTC m=+1.172857734" May 12 12:54:33.815018 kubelet[2619]: I0512 12:54:33.814967 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.814945585 podStartE2EDuration="1.814945585s" podCreationTimestamp="2025-05-12 12:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 12:54:33.80155728 +0000 UTC m=+1.174881899" watchObservedRunningTime="2025-05-12 12:54:33.814945585 +0000 UTC m=+1.188270084" May 12 12:54:36.860126 kubelet[2619]: I0512 12:54:36.860094 2619 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 12:54:36.860725 containerd[1503]: time="2025-05-12T12:54:36.860625509Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 12 12:54:36.861079 kubelet[2619]: I0512 12:54:36.860790 2619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 12 12:54:37.244936 sudo[1709]: pam_unix(sudo:session): session closed for user root May 12 12:54:37.247395 sshd[1708]: Connection closed by 10.0.0.1 port 57132 May 12 12:54:37.247782 sshd-session[1706]: pam_unix(sshd:session): session closed for user core May 12 12:54:37.251168 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:57132.service: Deactivated successfully. May 12 12:54:37.252822 systemd[1]: session-7.scope: Deactivated successfully. May 12 12:54:37.254105 systemd[1]: session-7.scope: Consumed 7.619s CPU time, 226.5M memory peak. May 12 12:54:37.255707 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. May 12 12:54:37.257893 systemd-logind[1493]: Removed session 7. May 12 12:54:37.747412 systemd[1]: Created slice kubepods-besteffort-pod58b41a14_933b_41a3_9384_a56535b4c39f.slice - libcontainer container kubepods-besteffort-pod58b41a14_933b_41a3_9384_a56535b4c39f.slice. 
May 12 12:54:37.825337 kubelet[2619]: I0512 12:54:37.825288 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58b41a14-933b-41a3-9384-a56535b4c39f-kube-proxy\") pod \"kube-proxy-8pgkt\" (UID: \"58b41a14-933b-41a3-9384-a56535b4c39f\") " pod="kube-system/kube-proxy-8pgkt" May 12 12:54:37.825337 kubelet[2619]: I0512 12:54:37.825336 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8l8\" (UniqueName: \"kubernetes.io/projected/58b41a14-933b-41a3-9384-a56535b4c39f-kube-api-access-4m8l8\") pod \"kube-proxy-8pgkt\" (UID: \"58b41a14-933b-41a3-9384-a56535b4c39f\") " pod="kube-system/kube-proxy-8pgkt" May 12 12:54:37.825465 kubelet[2619]: I0512 12:54:37.825355 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b41a14-933b-41a3-9384-a56535b4c39f-lib-modules\") pod \"kube-proxy-8pgkt\" (UID: \"58b41a14-933b-41a3-9384-a56535b4c39f\") " pod="kube-system/kube-proxy-8pgkt" May 12 12:54:37.825465 kubelet[2619]: I0512 12:54:37.825372 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b41a14-933b-41a3-9384-a56535b4c39f-xtables-lock\") pod \"kube-proxy-8pgkt\" (UID: \"58b41a14-933b-41a3-9384-a56535b4c39f\") " pod="kube-system/kube-proxy-8pgkt" May 12 12:54:37.900192 kubelet[2619]: I0512 12:54:37.900145 2619 status_manager.go:890] "Failed to get status for pod" podUID="061350a9-1446-4bc7-880e-6c357fbf20ff" pod="tigera-operator/tigera-operator-789496d6f5-khtb7" err="pods \"tigera-operator-789496d6f5-khtb7\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" May 12 
12:54:37.905750 systemd[1]: Created slice kubepods-besteffort-pod061350a9_1446_4bc7_880e_6c357fbf20ff.slice - libcontainer container kubepods-besteffort-pod061350a9_1446_4bc7_880e_6c357fbf20ff.slice. May 12 12:54:37.925778 kubelet[2619]: I0512 12:54:37.925741 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggj9v\" (UniqueName: \"kubernetes.io/projected/061350a9-1446-4bc7-880e-6c357fbf20ff-kube-api-access-ggj9v\") pod \"tigera-operator-789496d6f5-khtb7\" (UID: \"061350a9-1446-4bc7-880e-6c357fbf20ff\") " pod="tigera-operator/tigera-operator-789496d6f5-khtb7" May 12 12:54:37.926079 kubelet[2619]: I0512 12:54:37.925918 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/061350a9-1446-4bc7-880e-6c357fbf20ff-var-lib-calico\") pod \"tigera-operator-789496d6f5-khtb7\" (UID: \"061350a9-1446-4bc7-880e-6c357fbf20ff\") " pod="tigera-operator/tigera-operator-789496d6f5-khtb7" May 12 12:54:38.067856 containerd[1503]: time="2025-05-12T12:54:38.067755280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pgkt,Uid:58b41a14-933b-41a3-9384-a56535b4c39f,Namespace:kube-system,Attempt:0,}" May 12 12:54:38.084655 containerd[1503]: time="2025-05-12T12:54:38.084623805Z" level=info msg="connecting to shim a5587e9a87e6c7c3770ea69689efa0c8c4a18f2dd0cc9bb18bc1bd8311cd9ae8" address="unix:///run/containerd/s/e5fc293c8d3c7364ee151a3ab7f68d8523d185793ee61de94da763f723915f3b" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:38.112007 systemd[1]: Started cri-containerd-a5587e9a87e6c7c3770ea69689efa0c8c4a18f2dd0cc9bb18bc1bd8311cd9ae8.scope - libcontainer container a5587e9a87e6c7c3770ea69689efa0c8c4a18f2dd0cc9bb18bc1bd8311cd9ae8. 
May 12 12:54:38.131953 containerd[1503]: time="2025-05-12T12:54:38.131906201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pgkt,Uid:58b41a14-933b-41a3-9384-a56535b4c39f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5587e9a87e6c7c3770ea69689efa0c8c4a18f2dd0cc9bb18bc1bd8311cd9ae8\"" May 12 12:54:38.146354 containerd[1503]: time="2025-05-12T12:54:38.146317926Z" level=info msg="CreateContainer within sandbox \"a5587e9a87e6c7c3770ea69689efa0c8c4a18f2dd0cc9bb18bc1bd8311cd9ae8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 12 12:54:38.154259 containerd[1503]: time="2025-05-12T12:54:38.153754735Z" level=info msg="Container 2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:38.161495 containerd[1503]: time="2025-05-12T12:54:38.161462695Z" level=info msg="CreateContainer within sandbox \"a5587e9a87e6c7c3770ea69689efa0c8c4a18f2dd0cc9bb18bc1bd8311cd9ae8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723\"" May 12 12:54:38.162878 containerd[1503]: time="2025-05-12T12:54:38.162196739Z" level=info msg="StartContainer for \"2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723\"" May 12 12:54:38.164627 containerd[1503]: time="2025-05-12T12:54:38.164545567Z" level=info msg="connecting to shim 2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723" address="unix:///run/containerd/s/e5fc293c8d3c7364ee151a3ab7f68d8523d185793ee61de94da763f723915f3b" protocol=ttrpc version=3 May 12 12:54:38.184010 systemd[1]: Started cri-containerd-2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723.scope - libcontainer container 2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723. 
May 12 12:54:38.209311 containerd[1503]: time="2025-05-12T12:54:38.209277392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-khtb7,Uid:061350a9-1446-4bc7-880e-6c357fbf20ff,Namespace:tigera-operator,Attempt:0,}" May 12 12:54:38.219308 containerd[1503]: time="2025-05-12T12:54:38.219280134Z" level=info msg="StartContainer for \"2a44df70fda97a3e9aa88faf22a809c1bdd9bbbe0312978f7071d3f57ac60723\" returns successfully" May 12 12:54:38.232380 containerd[1503]: time="2025-05-12T12:54:38.232345985Z" level=info msg="connecting to shim f159acfbd07299bcdbef1d959521bb3b73dec59d3bf7b76a36965feedabc8227" address="unix:///run/containerd/s/5fb28521fe4e6ec00197a98355cd35f6af52399c6d167356ea3ea06d00de5910" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:38.255999 systemd[1]: Started cri-containerd-f159acfbd07299bcdbef1d959521bb3b73dec59d3bf7b76a36965feedabc8227.scope - libcontainer container f159acfbd07299bcdbef1d959521bb3b73dec59d3bf7b76a36965feedabc8227. May 12 12:54:38.288329 containerd[1503]: time="2025-05-12T12:54:38.288290130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-khtb7,Uid:061350a9-1446-4bc7-880e-6c357fbf20ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f159acfbd07299bcdbef1d959521bb3b73dec59d3bf7b76a36965feedabc8227\"" May 12 12:54:38.291291 containerd[1503]: time="2025-05-12T12:54:38.290220590Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 12 12:54:38.763059 kubelet[2619]: I0512 12:54:38.762995 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pgkt" podStartSLOduration=1.762935742 podStartE2EDuration="1.762935742s" podCreationTimestamp="2025-05-12 12:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 12:54:38.754963912 +0000 UTC m=+6.128288451" watchObservedRunningTime="2025-05-12 
12:54:38.762935742 +0000 UTC m=+6.136260281" May 12 12:54:38.939296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230321075.mount: Deactivated successfully. May 12 12:54:40.033039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867144212.mount: Deactivated successfully. May 12 12:54:41.132675 containerd[1503]: time="2025-05-12T12:54:41.132621319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:41.133194 containerd[1503]: time="2025-05-12T12:54:41.133144889Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 12 12:54:41.133866 containerd[1503]: time="2025-05-12T12:54:41.133827395Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:41.135591 containerd[1503]: time="2025-05-12T12:54:41.135521119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:41.136382 containerd[1503]: time="2025-05-12T12:54:41.136340038Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.845037604s" May 12 12:54:41.136382 containerd[1503]: time="2025-05-12T12:54:41.136377802Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 12 12:54:41.146602 containerd[1503]: time="2025-05-12T12:54:41.146278559Z" level=info 
msg="CreateContainer within sandbox \"f159acfbd07299bcdbef1d959521bb3b73dec59d3bf7b76a36965feedabc8227\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 12 12:54:41.151607 containerd[1503]: time="2025-05-12T12:54:41.151468421Z" level=info msg="Container e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:41.154868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231190544.mount: Deactivated successfully. May 12 12:54:41.158427 containerd[1503]: time="2025-05-12T12:54:41.158337765Z" level=info msg="CreateContainer within sandbox \"f159acfbd07299bcdbef1d959521bb3b73dec59d3bf7b76a36965feedabc8227\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f\"" May 12 12:54:41.158806 containerd[1503]: time="2025-05-12T12:54:41.158770527Z" level=info msg="StartContainer for \"e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f\"" May 12 12:54:41.159633 containerd[1503]: time="2025-05-12T12:54:41.159607888Z" level=info msg="connecting to shim e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f" address="unix:///run/containerd/s/5fb28521fe4e6ec00197a98355cd35f6af52399c6d167356ea3ea06d00de5910" protocol=ttrpc version=3 May 12 12:54:41.205060 systemd[1]: Started cri-containerd-e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f.scope - libcontainer container e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f. 
May 12 12:54:41.232262 containerd[1503]: time="2025-05-12T12:54:41.232211508Z" level=info msg="StartContainer for \"e8ab91ea0c29871bb2060f9a1c4df975785b487e75cd8ab55987608c4cf3612f\" returns successfully" May 12 12:54:41.758990 kubelet[2619]: I0512 12:54:41.758917 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-khtb7" podStartSLOduration=1.903186805 podStartE2EDuration="4.758873031s" podCreationTimestamp="2025-05-12 12:54:37 +0000 UTC" firstStartedPulling="2025-05-12 12:54:38.289479946 +0000 UTC m=+5.662804445" lastFinishedPulling="2025-05-12 12:54:41.145166132 +0000 UTC m=+8.518490671" observedRunningTime="2025-05-12 12:54:41.758649329 +0000 UTC m=+9.131973908" watchObservedRunningTime="2025-05-12 12:54:41.758873031 +0000 UTC m=+9.132197570" May 12 12:54:45.340332 systemd[1]: Created slice kubepods-besteffort-pod83820671_4605_4178_85c5_0a13d565d2fa.slice - libcontainer container kubepods-besteffort-pod83820671_4605_4178_85c5_0a13d565d2fa.slice. 
May 12 12:54:45.378934 kubelet[2619]: I0512 12:54:45.378898 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83820671-4605-4178-85c5-0a13d565d2fa-typha-certs\") pod \"calico-typha-79d56d9996-zl87x\" (UID: \"83820671-4605-4178-85c5-0a13d565d2fa\") " pod="calico-system/calico-typha-79d56d9996-zl87x" May 12 12:54:45.379367 kubelet[2619]: I0512 12:54:45.379292 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5w2\" (UniqueName: \"kubernetes.io/projected/83820671-4605-4178-85c5-0a13d565d2fa-kube-api-access-sg5w2\") pod \"calico-typha-79d56d9996-zl87x\" (UID: \"83820671-4605-4178-85c5-0a13d565d2fa\") " pod="calico-system/calico-typha-79d56d9996-zl87x" May 12 12:54:45.379367 kubelet[2619]: I0512 12:54:45.379328 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83820671-4605-4178-85c5-0a13d565d2fa-tigera-ca-bundle\") pod \"calico-typha-79d56d9996-zl87x\" (UID: \"83820671-4605-4178-85c5-0a13d565d2fa\") " pod="calico-system/calico-typha-79d56d9996-zl87x" May 12 12:54:45.506428 systemd[1]: Created slice kubepods-besteffort-podf63aac50_b845_48c2_a535_916d18cefe45.slice - libcontainer container kubepods-besteffort-podf63aac50_b845_48c2_a535_916d18cefe45.slice. 
May 12 12:54:45.580445 kubelet[2619]: I0512 12:54:45.580380 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-xtables-lock\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580445 kubelet[2619]: I0512 12:54:45.580417 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f63aac50-b845-48c2-a535-916d18cefe45-node-certs\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580445 kubelet[2619]: I0512 12:54:45.580447 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-cni-log-dir\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580874 kubelet[2619]: I0512 12:54:45.580462 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-flexvol-driver-host\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580874 kubelet[2619]: I0512 12:54:45.580480 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99w6c\" (UniqueName: \"kubernetes.io/projected/f63aac50-b845-48c2-a535-916d18cefe45-kube-api-access-99w6c\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580874 kubelet[2619]: I0512 12:54:45.580520 
2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-policysync\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580874 kubelet[2619]: I0512 12:54:45.580551 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f63aac50-b845-48c2-a535-916d18cefe45-tigera-ca-bundle\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580874 kubelet[2619]: I0512 12:54:45.580567 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-cni-bin-dir\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580983 kubelet[2619]: I0512 12:54:45.580586 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-var-lib-calico\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580983 kubelet[2619]: I0512 12:54:45.580604 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-lib-modules\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580983 kubelet[2619]: I0512 12:54:45.580618 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-var-run-calico\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.580983 kubelet[2619]: I0512 12:54:45.580631 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f63aac50-b845-48c2-a535-916d18cefe45-cni-net-dir\") pod \"calico-node-xrd5n\" (UID: \"f63aac50-b845-48c2-a535-916d18cefe45\") " pod="calico-system/calico-node-xrd5n" May 12 12:54:45.648346 containerd[1503]: time="2025-05-12T12:54:45.648145108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79d56d9996-zl87x,Uid:83820671-4605-4178-85c5-0a13d565d2fa,Namespace:calico-system,Attempt:0,}" May 12 12:54:45.695650 kubelet[2619]: E0512 12:54:45.695601 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:45.697037 kubelet[2619]: E0512 12:54:45.697003 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.697118 kubelet[2619]: W0512 12:54:45.697076 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.709362 kubelet[2619]: E0512 12:54:45.709193 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.709362 kubelet[2619]: W0512 12:54:45.709217 2619 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.709362 kubelet[2619]: E0512 12:54:45.709242 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.710805 kubelet[2619]: E0512 12:54:45.710772 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.728875 containerd[1503]: time="2025-05-12T12:54:45.728797888Z" level=info msg="connecting to shim d0943cdb8c8eee91b5473ae7ee68f400e34886630557319ffc6e4127cf5f2eff" address="unix:///run/containerd/s/696002c6b2cabb6b128fffa82f1a4b00547acec582f701f68361daf24a01b90e" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:45.756995 systemd[1]: Started cri-containerd-d0943cdb8c8eee91b5473ae7ee68f400e34886630557319ffc6e4127cf5f2eff.scope - libcontainer container d0943cdb8c8eee91b5473ae7ee68f400e34886630557319ffc6e4127cf5f2eff. May 12 12:54:45.771248 kubelet[2619]: E0512 12:54:45.770919 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.771248 kubelet[2619]: W0512 12:54:45.770940 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.771248 kubelet[2619]: E0512 12:54:45.770960 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.771634 kubelet[2619]: E0512 12:54:45.771588 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.775415 kubelet[2619]: W0512 12:54:45.771605 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.775415 kubelet[2619]: E0512 12:54:45.775443 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.776115 kubelet[2619]: E0512 12:54:45.776050 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.776115 kubelet[2619]: W0512 12:54:45.776066 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.776115 kubelet[2619]: E0512 12:54:45.776079 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.776667 kubelet[2619]: E0512 12:54:45.776558 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.776931 kubelet[2619]: W0512 12:54:45.776831 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.776931 kubelet[2619]: E0512 12:54:45.776866 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.777193 kubelet[2619]: E0512 12:54:45.777179 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.777282 kubelet[2619]: W0512 12:54:45.777270 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.777493 kubelet[2619]: E0512 12:54:45.777341 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.777687 kubelet[2619]: E0512 12:54:45.777673 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.777865 kubelet[2619]: W0512 12:54:45.777714 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.777865 kubelet[2619]: E0512 12:54:45.777728 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.777991 kubelet[2619]: E0512 12:54:45.777979 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.778129 kubelet[2619]: W0512 12:54:45.778025 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.778129 kubelet[2619]: E0512 12:54:45.778037 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.778420 kubelet[2619]: E0512 12:54:45.778305 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.778420 kubelet[2619]: W0512 12:54:45.778320 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.778420 kubelet[2619]: E0512 12:54:45.778339 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.779111 kubelet[2619]: E0512 12:54:45.779035 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.779111 kubelet[2619]: W0512 12:54:45.779050 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.779111 kubelet[2619]: E0512 12:54:45.779060 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.779477 kubelet[2619]: E0512 12:54:45.779385 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.779477 kubelet[2619]: W0512 12:54:45.779398 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.779477 kubelet[2619]: E0512 12:54:45.779413 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.780286 kubelet[2619]: E0512 12:54:45.779965 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.780468 kubelet[2619]: W0512 12:54:45.780413 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.780468 kubelet[2619]: E0512 12:54:45.780432 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.780652 kubelet[2619]: E0512 12:54:45.780635 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.780694 kubelet[2619]: W0512 12:54:45.780652 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.780694 kubelet[2619]: E0512 12:54:45.780662 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.780977 kubelet[2619]: E0512 12:54:45.780962 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.780977 kubelet[2619]: W0512 12:54:45.780976 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.781050 kubelet[2619]: E0512 12:54:45.780987 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.781398 kubelet[2619]: E0512 12:54:45.781383 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.781439 kubelet[2619]: W0512 12:54:45.781398 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.781439 kubelet[2619]: E0512 12:54:45.781410 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.781755 kubelet[2619]: E0512 12:54:45.781732 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.781959 kubelet[2619]: W0512 12:54:45.781771 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.781959 kubelet[2619]: E0512 12:54:45.781784 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.782130 kubelet[2619]: E0512 12:54:45.781998 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.782130 kubelet[2619]: W0512 12:54:45.782007 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.782130 kubelet[2619]: E0512 12:54:45.782032 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.782468 kubelet[2619]: E0512 12:54:45.782197 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.782468 kubelet[2619]: W0512 12:54:45.782229 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.782468 kubelet[2619]: E0512 12:54:45.782239 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.782468 kubelet[2619]: E0512 12:54:45.782425 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.782468 kubelet[2619]: W0512 12:54:45.782434 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.782468 kubelet[2619]: E0512 12:54:45.782442 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.782637 kubelet[2619]: E0512 12:54:45.782577 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.782637 kubelet[2619]: W0512 12:54:45.782585 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.782637 kubelet[2619]: E0512 12:54:45.782592 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.782757 kubelet[2619]: E0512 12:54:45.782724 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.782757 kubelet[2619]: W0512 12:54:45.782730 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.782757 kubelet[2619]: E0512 12:54:45.782737 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.783034 kubelet[2619]: E0512 12:54:45.783017 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.783034 kubelet[2619]: W0512 12:54:45.783031 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.783099 kubelet[2619]: E0512 12:54:45.783040 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.783099 kubelet[2619]: I0512 12:54:45.783062 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/03a90fea-3da3-425f-9a3a-c6653e1060a7-varrun\") pod \"csi-node-driver-bhnrs\" (UID: \"03a90fea-3da3-425f-9a3a-c6653e1060a7\") " pod="calico-system/csi-node-driver-bhnrs" May 12 12:54:45.783287 kubelet[2619]: E0512 12:54:45.783264 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.783287 kubelet[2619]: W0512 12:54:45.783280 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.783287 kubelet[2619]: E0512 12:54:45.783289 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.783372 kubelet[2619]: I0512 12:54:45.783306 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03a90fea-3da3-425f-9a3a-c6653e1060a7-kubelet-dir\") pod \"csi-node-driver-bhnrs\" (UID: \"03a90fea-3da3-425f-9a3a-c6653e1060a7\") " pod="calico-system/csi-node-driver-bhnrs" May 12 12:54:45.783503 kubelet[2619]: E0512 12:54:45.783482 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.783503 kubelet[2619]: W0512 12:54:45.783497 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.783950 kubelet[2619]: E0512 12:54:45.783514 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.783950 kubelet[2619]: I0512 12:54:45.783529 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03a90fea-3da3-425f-9a3a-c6653e1060a7-socket-dir\") pod \"csi-node-driver-bhnrs\" (UID: \"03a90fea-3da3-425f-9a3a-c6653e1060a7\") " pod="calico-system/csi-node-driver-bhnrs" May 12 12:54:45.784272 kubelet[2619]: E0512 12:54:45.784244 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.784272 kubelet[2619]: W0512 12:54:45.784264 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.784335 kubelet[2619]: E0512 12:54:45.784281 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.784335 kubelet[2619]: I0512 12:54:45.784298 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03a90fea-3da3-425f-9a3a-c6653e1060a7-registration-dir\") pod \"csi-node-driver-bhnrs\" (UID: \"03a90fea-3da3-425f-9a3a-c6653e1060a7\") " pod="calico-system/csi-node-driver-bhnrs" May 12 12:54:45.785223 kubelet[2619]: E0512 12:54:45.785200 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.785223 kubelet[2619]: W0512 12:54:45.785216 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.785632 kubelet[2619]: E0512 12:54:45.785237 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.785632 kubelet[2619]: I0512 12:54:45.785252 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9v8\" (UniqueName: \"kubernetes.io/projected/03a90fea-3da3-425f-9a3a-c6653e1060a7-kube-api-access-zg9v8\") pod \"csi-node-driver-bhnrs\" (UID: \"03a90fea-3da3-425f-9a3a-c6653e1060a7\") " pod="calico-system/csi-node-driver-bhnrs" May 12 12:54:45.785632 kubelet[2619]: E0512 12:54:45.785476 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.785632 kubelet[2619]: W0512 12:54:45.785489 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.785632 kubelet[2619]: E0512 12:54:45.785539 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.785735 kubelet[2619]: E0512 12:54:45.785679 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.785735 kubelet[2619]: W0512 12:54:45.785688 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.785787 kubelet[2619]: E0512 12:54:45.785769 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.785927 kubelet[2619]: E0512 12:54:45.785911 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.785927 kubelet[2619]: W0512 12:54:45.785923 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.786123 kubelet[2619]: E0512 12:54:45.786045 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.786327 kubelet[2619]: E0512 12:54:45.786313 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.786327 kubelet[2619]: W0512 12:54:45.786326 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.786377 kubelet[2619]: E0512 12:54:45.786358 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.786597 kubelet[2619]: E0512 12:54:45.786579 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.786707 kubelet[2619]: W0512 12:54:45.786688 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.786939 kubelet[2619]: E0512 12:54:45.786923 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.787209 kubelet[2619]: E0512 12:54:45.787193 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.787243 kubelet[2619]: W0512 12:54:45.787208 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.787243 kubelet[2619]: E0512 12:54:45.787219 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.787420 kubelet[2619]: E0512 12:54:45.787405 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.787420 kubelet[2619]: W0512 12:54:45.787415 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.787503 kubelet[2619]: E0512 12:54:45.787423 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.787595 kubelet[2619]: E0512 12:54:45.787574 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.787595 kubelet[2619]: W0512 12:54:45.787584 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.787595 kubelet[2619]: E0512 12:54:45.787592 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.787734 kubelet[2619]: E0512 12:54:45.787720 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.787734 kubelet[2619]: W0512 12:54:45.787732 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.787804 kubelet[2619]: E0512 12:54:45.787740 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:45.787905 kubelet[2619]: E0512 12:54:45.787893 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.787905 kubelet[2619]: W0512 12:54:45.787904 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.787951 kubelet[2619]: E0512 12:54:45.787913 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:45.809978 containerd[1503]: time="2025-05-12T12:54:45.809940747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xrd5n,Uid:f63aac50-b845-48c2-a535-916d18cefe45,Namespace:calico-system,Attempt:0,}" May 12 12:54:45.827474 containerd[1503]: time="2025-05-12T12:54:45.827435633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79d56d9996-zl87x,Uid:83820671-4605-4178-85c5-0a13d565d2fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0943cdb8c8eee91b5473ae7ee68f400e34886630557319ffc6e4127cf5f2eff\"" May 12 12:54:45.829316 containerd[1503]: time="2025-05-12T12:54:45.829278897Z" level=info msg="connecting to shim a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f" address="unix:///run/containerd/s/52628b4a6b0dec39868b1b5803a303f378a9029b7ddadbbcdbd6d463839314b0" namespace=k8s.io protocol=ttrpc version=3 May 12 12:54:45.832446 containerd[1503]: time="2025-05-12T12:54:45.832421263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 12 12:54:45.851000 systemd[1]: Started cri-containerd-a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f.scope - libcontainer container a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f. 
May 12 12:54:45.874082 containerd[1503]: time="2025-05-12T12:54:45.874049474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xrd5n,Uid:f63aac50-b845-48c2-a535-916d18cefe45,Namespace:calico-system,Attempt:0,} returns sandbox id \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\"" May 12 12:54:45.886157 kubelet[2619]: E0512 12:54:45.886135 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:45.886265 kubelet[2619]: W0512 12:54:45.886250 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:45.886333 kubelet[2619]: E0512 12:54:45.886321 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:46.720249 kubelet[2619]: E0512 12:54:46.720190 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:47.499556 update_engine[1495]: I20250512 12:54:47.499498 1495 update_attempter.cc:509] Updating boot flags...
May 12 12:54:48.720959 kubelet[2619]: E0512 12:54:48.720908 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:50.721866 kubelet[2619]: E0512 12:54:50.721117 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:51.047188 containerd[1503]: time="2025-05-12T12:54:51.047071683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:51.049487 containerd[1503]: time="2025-05-12T12:54:51.049441180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 12 12:54:51.051513 containerd[1503]: time="2025-05-12T12:54:51.050732175Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:51.056745 containerd[1503]: time="2025-05-12T12:54:51.056719402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:51.057658 containerd[1503]: time="2025-05-12T12:54:51.057626614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 5.225097743s" May 12 12:54:51.058014 containerd[1503]: time="2025-05-12T12:54:51.057992915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 12 12:54:51.059486 containerd[1503]: time="2025-05-12T12:54:51.059462520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 12 12:54:51.080414 containerd[1503]: time="2025-05-12T12:54:51.080379010Z" level=info msg="CreateContainer within sandbox \"d0943cdb8c8eee91b5473ae7ee68f400e34886630557319ffc6e4127cf5f2eff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 12 12:54:51.086675 containerd[1503]: time="2025-05-12T12:54:51.086642533Z" level=info msg="Container 728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:51.095787 containerd[1503]: time="2025-05-12T12:54:51.095745220Z" level=info msg="CreateContainer within sandbox \"d0943cdb8c8eee91b5473ae7ee68f400e34886630557319ffc6e4127cf5f2eff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4\"" May 12 12:54:51.096450 containerd[1503]: time="2025-05-12T12:54:51.096410658Z" level=info msg="StartContainer for \"728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4\"" May 12 12:54:51.097848 containerd[1503]: time="2025-05-12T12:54:51.097800499Z" level=info msg="connecting to shim 728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4" address="unix:///run/containerd/s/696002c6b2cabb6b128fffa82f1a4b00547acec582f701f68361daf24a01b90e" protocol=ttrpc version=3 May 12 12:54:51.120130 systemd[1]: Started 
cri-containerd-728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4.scope - libcontainer container 728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4. May 12 12:54:51.171087 containerd[1503]: time="2025-05-12T12:54:51.171005534Z" level=info msg="StartContainer for \"728593c31a30ec47846a514b3f24581c6cbf75570876d88e64a4df26432f33a4\" returns successfully" May 12 12:54:51.772597 kubelet[2619]: E0512 12:54:51.771932 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:54:51.784193 kubelet[2619]: I0512 12:54:51.784119 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79d56d9996-zl87x" podStartSLOduration=1.554249639 podStartE2EDuration="6.784105409s" podCreationTimestamp="2025-05-12 12:54:45 +0000 UTC" firstStartedPulling="2025-05-12 12:54:45.829013236 +0000 UTC m=+13.202337775" lastFinishedPulling="2025-05-12 12:54:51.058869006 +0000 UTC m=+18.432193545" observedRunningTime="2025-05-12 12:54:51.783390168 +0000 UTC m=+19.156714787" watchObservedRunningTime="2025-05-12 12:54:51.784105409 +0000 UTC m=+19.157429908" May 12 12:54:51.821006 kubelet[2619]: E0512 12:54:51.820971 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.821096 kubelet[2619]: W0512 12:54:51.821011 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.821096 kubelet[2619]: E0512 12:54:51.821043 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 12 12:54:51.833768 kubelet[2619]: E0512 12:54:51.833755 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.833768 kubelet[2619]: W0512 12:54:51.833766 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.833827 kubelet[2619]: E0512 12:54:51.833780 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:51.834159 kubelet[2619]: E0512 12:54:51.834035 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.834159 kubelet[2619]: W0512 12:54:51.834055 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.834159 kubelet[2619]: E0512 12:54:51.834073 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:51.834322 kubelet[2619]: E0512 12:54:51.834310 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.834382 kubelet[2619]: W0512 12:54:51.834370 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.834493 kubelet[2619]: E0512 12:54:51.834466 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:51.834731 kubelet[2619]: E0512 12:54:51.834639 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.834731 kubelet[2619]: W0512 12:54:51.834651 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.834731 kubelet[2619]: E0512 12:54:51.834674 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:51.834903 kubelet[2619]: E0512 12:54:51.834890 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.835078 kubelet[2619]: W0512 12:54:51.834957 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.835078 kubelet[2619]: E0512 12:54:51.834980 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:51.835216 kubelet[2619]: E0512 12:54:51.835204 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.835272 kubelet[2619]: W0512 12:54:51.835262 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.835341 kubelet[2619]: E0512 12:54:51.835329 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:51.835506 kubelet[2619]: E0512 12:54:51.835469 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.835506 kubelet[2619]: W0512 12:54:51.835485 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.835506 kubelet[2619]: E0512 12:54:51.835498 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:51.835621 kubelet[2619]: E0512 12:54:51.835610 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.835621 kubelet[2619]: W0512 12:54:51.835619 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.835706 kubelet[2619]: E0512 12:54:51.835630 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:51.835792 kubelet[2619]: E0512 12:54:51.835779 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.835792 kubelet[2619]: W0512 12:54:51.835790 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.835856 kubelet[2619]: E0512 12:54:51.835802 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:51.836071 kubelet[2619]: E0512 12:54:51.836054 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.836071 kubelet[2619]: W0512 12:54:51.836070 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.836127 kubelet[2619]: E0512 12:54:51.836086 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:51.836313 kubelet[2619]: E0512 12:54:51.836299 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.836313 kubelet[2619]: W0512 12:54:51.836312 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.836366 kubelet[2619]: E0512 12:54:51.836326 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:51.836483 kubelet[2619]: E0512 12:54:51.836472 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:51.836509 kubelet[2619]: W0512 12:54:51.836483 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:51.836509 kubelet[2619]: E0512 12:54:51.836491 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.721117 kubelet[2619]: E0512 12:54:52.721071 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:52.773676 kubelet[2619]: I0512 12:54:52.773574 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 12:54:52.774373 kubelet[2619]: E0512 12:54:52.774324 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:54:52.831370 kubelet[2619]: E0512 12:54:52.831348 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.831370 kubelet[2619]: W0512 12:54:52.831367 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.831489 kubelet[2619]: E0512 12:54:52.831383 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.831557 kubelet[2619]: E0512 12:54:52.831544 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.831557 kubelet[2619]: W0512 12:54:52.831554 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.831617 kubelet[2619]: E0512 12:54:52.831562 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.831699 kubelet[2619]: E0512 12:54:52.831688 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.831699 kubelet[2619]: W0512 12:54:52.831698 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.831753 kubelet[2619]: E0512 12:54:52.831706 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.831830 kubelet[2619]: E0512 12:54:52.831820 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.831830 kubelet[2619]: W0512 12:54:52.831829 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.831895 kubelet[2619]: E0512 12:54:52.831851 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.831985 kubelet[2619]: E0512 12:54:52.831974 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832020 kubelet[2619]: W0512 12:54:52.831985 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832020 kubelet[2619]: E0512 12:54:52.831994 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.832122 kubelet[2619]: E0512 12:54:52.832112 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832122 kubelet[2619]: W0512 12:54:52.832121 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832170 kubelet[2619]: E0512 12:54:52.832129 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.832257 kubelet[2619]: E0512 12:54:52.832245 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832257 kubelet[2619]: W0512 12:54:52.832255 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832321 kubelet[2619]: E0512 12:54:52.832262 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.832387 kubelet[2619]: E0512 12:54:52.832375 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832387 kubelet[2619]: W0512 12:54:52.832385 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832437 kubelet[2619]: E0512 12:54:52.832393 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.832523 kubelet[2619]: E0512 12:54:52.832512 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832523 kubelet[2619]: W0512 12:54:52.832522 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832643 kubelet[2619]: E0512 12:54:52.832529 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.832643 kubelet[2619]: E0512 12:54:52.832636 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832643 kubelet[2619]: W0512 12:54:52.832643 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832718 kubelet[2619]: E0512 12:54:52.832650 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.832768 kubelet[2619]: E0512 12:54:52.832757 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832768 kubelet[2619]: W0512 12:54:52.832766 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832816 kubelet[2619]: E0512 12:54:52.832775 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.832901 kubelet[2619]: E0512 12:54:52.832889 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.832901 kubelet[2619]: W0512 12:54:52.832899 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.832971 kubelet[2619]: E0512 12:54:52.832906 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.833037 kubelet[2619]: E0512 12:54:52.833026 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.833037 kubelet[2619]: W0512 12:54:52.833036 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.833083 kubelet[2619]: E0512 12:54:52.833043 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.833161 kubelet[2619]: E0512 12:54:52.833150 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.833161 kubelet[2619]: W0512 12:54:52.833160 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.833232 kubelet[2619]: E0512 12:54:52.833166 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.833297 kubelet[2619]: E0512 12:54:52.833286 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.833297 kubelet[2619]: W0512 12:54:52.833295 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.833349 kubelet[2619]: E0512 12:54:52.833303 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.840595 kubelet[2619]: E0512 12:54:52.840577 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.840595 kubelet[2619]: W0512 12:54:52.840593 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.840674 kubelet[2619]: E0512 12:54:52.840606 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.840789 kubelet[2619]: E0512 12:54:52.840778 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.840789 kubelet[2619]: W0512 12:54:52.840788 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.840859 kubelet[2619]: E0512 12:54:52.840802 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.840970 kubelet[2619]: E0512 12:54:52.840954 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.840970 kubelet[2619]: W0512 12:54:52.840966 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.841020 kubelet[2619]: E0512 12:54:52.840980 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.841168 kubelet[2619]: E0512 12:54:52.841157 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.841168 kubelet[2619]: W0512 12:54:52.841167 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.841242 kubelet[2619]: E0512 12:54:52.841182 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.841350 kubelet[2619]: E0512 12:54:52.841336 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.841350 kubelet[2619]: W0512 12:54:52.841347 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.841392 kubelet[2619]: E0512 12:54:52.841359 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.841488 kubelet[2619]: E0512 12:54:52.841477 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.841488 kubelet[2619]: W0512 12:54:52.841487 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.841488 kubelet[2619]: E0512 12:54:52.841498 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.841666 kubelet[2619]: E0512 12:54:52.841655 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.841666 kubelet[2619]: W0512 12:54:52.841665 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.841718 kubelet[2619]: E0512 12:54:52.841678 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.841907 kubelet[2619]: E0512 12:54:52.841890 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.841907 kubelet[2619]: W0512 12:54:52.841906 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.841968 kubelet[2619]: E0512 12:54:52.841925 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.842062 kubelet[2619]: E0512 12:54:52.842050 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.842062 kubelet[2619]: W0512 12:54:52.842062 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.842108 kubelet[2619]: E0512 12:54:52.842082 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.842194 kubelet[2619]: E0512 12:54:52.842183 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.842194 kubelet[2619]: W0512 12:54:52.842193 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.842250 kubelet[2619]: E0512 12:54:52.842229 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.842330 kubelet[2619]: E0512 12:54:52.842319 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.842351 kubelet[2619]: W0512 12:54:52.842329 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.842351 kubelet[2619]: E0512 12:54:52.842342 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.842493 kubelet[2619]: E0512 12:54:52.842481 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.842493 kubelet[2619]: W0512 12:54:52.842492 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.842545 kubelet[2619]: E0512 12:54:52.842503 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.842658 kubelet[2619]: E0512 12:54:52.842645 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.842658 kubelet[2619]: W0512 12:54:52.842656 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.842701 kubelet[2619]: E0512 12:54:52.842669 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.842887 kubelet[2619]: E0512 12:54:52.842873 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.842914 kubelet[2619]: W0512 12:54:52.842886 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.842914 kubelet[2619]: E0512 12:54:52.842897 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.843035 kubelet[2619]: E0512 12:54:52.843024 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.843035 kubelet[2619]: W0512 12:54:52.843034 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.843080 kubelet[2619]: E0512 12:54:52.843041 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.843190 kubelet[2619]: E0512 12:54:52.843178 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.843190 kubelet[2619]: W0512 12:54:52.843188 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.843250 kubelet[2619]: E0512 12:54:52.843200 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:52.843456 kubelet[2619]: E0512 12:54:52.843424 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.843456 kubelet[2619]: W0512 12:54:52.843440 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.843498 kubelet[2619]: E0512 12:54:52.843456 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 12:54:52.843597 kubelet[2619]: E0512 12:54:52.843586 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 12:54:52.843618 kubelet[2619]: W0512 12:54:52.843598 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 12:54:52.843618 kubelet[2619]: E0512 12:54:52.843606 2619 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 12:54:54.326056 containerd[1503]: time="2025-05-12T12:54:54.325986042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:54.326645 containerd[1503]: time="2025-05-12T12:54:54.326603473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 12 12:54:54.327454 containerd[1503]: time="2025-05-12T12:54:54.327422794Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:54.329333 containerd[1503]: time="2025-05-12T12:54:54.329300049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:54.329983 containerd[1503]: time="2025-05-12T12:54:54.329954962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 3.270356433s" May 12 12:54:54.330153 containerd[1503]: time="2025-05-12T12:54:54.330056607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 12 12:54:54.331907 containerd[1503]: time="2025-05-12T12:54:54.331792654Z" level=info msg="CreateContainer within sandbox \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 12 12:54:54.351517 containerd[1503]: time="2025-05-12T12:54:54.351488125Z" level=info msg="Container 91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:54.357646 containerd[1503]: time="2025-05-12T12:54:54.357549990Z" level=info msg="CreateContainer within sandbox \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\"" May 12 12:54:54.357951 containerd[1503]: time="2025-05-12T12:54:54.357917249Z" level=info msg="StartContainer for \"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\"" May 12 12:54:54.359424 containerd[1503]: time="2025-05-12T12:54:54.359397963Z" level=info msg="connecting to shim 91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f" address="unix:///run/containerd/s/52628b4a6b0dec39868b1b5803a303f378a9029b7ddadbbcdbd6d463839314b0" protocol=ttrpc version=3 May 12 12:54:54.377003 systemd[1]: Started cri-containerd-91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f.scope - libcontainer container 91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f. May 12 12:54:54.455651 containerd[1503]: time="2025-05-12T12:54:54.454034806Z" level=info msg="StartContainer for \"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\" returns successfully" May 12 12:54:54.475939 systemd[1]: cri-containerd-91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f.scope: Deactivated successfully. 
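The containerd entries above share one structured text format: `time="…" level=… msg="…"`. As an illustration only (this helper is not part of containerd; the field layout is taken from the lines captured above), a minimal parser might look like:

```python
import re

# Hypothetical helper: split one containerd text-format log line
# (time="..." level=... msg="...") into its fields.
CONTAINERD_LINE = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>.*)"$'
)

def parse_containerd_line(line: str) -> dict:
    """Return the time/level/msg fields, or raise if the line doesn't match."""
    m = CONTAINERD_LINE.match(line)
    if m is None:
        raise ValueError("not a containerd text-format log line")
    return m.groupdict()

# Sample modeled on the StartContainer entries above (message simplified).
sample = 'time="2025-05-12T12:54:54.357917249Z" level=info msg="StartContainer returns successfully"'
fields = parse_containerd_line(sample)
```

Such a parser only handles the text formatter shown here; containerd can also emit JSON logs, which would need `json.loads` instead.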
May 12 12:54:54.490294 containerd[1503]: time="2025-05-12T12:54:54.490241948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\" id:\"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\" pid:3333 exited_at:{seconds:1747054494 nanos:482408193}" May 12 12:54:54.490294 containerd[1503]: time="2025-05-12T12:54:54.490242348Z" level=info msg="received exit event container_id:\"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\" id:\"91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f\" pid:3333 exited_at:{seconds:1747054494 nanos:482408193}" May 12 12:54:54.521745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d6efcbb31f6984e9e26c817a9568c280aa62315223aee15e90ec8af7afb22f-rootfs.mount: Deactivated successfully. May 12 12:54:54.720491 kubelet[2619]: E0512 12:54:54.720182 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:54.779094 kubelet[2619]: E0512 12:54:54.779069 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:54:54.780694 containerd[1503]: time="2025-05-12T12:54:54.780655362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 12 12:54:56.720583 kubelet[2619]: E0512 12:54:56.720495 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" 
podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:58.721407 kubelet[2619]: E0512 12:54:58.721043 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:54:59.188600 containerd[1503]: time="2025-05-12T12:54:59.188471475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:59.189163 containerd[1503]: time="2025-05-12T12:54:59.188834090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 12 12:54:59.189621 containerd[1503]: time="2025-05-12T12:54:59.189597641Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:59.191459 containerd[1503]: time="2025-05-12T12:54:59.191428955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:54:59.192722 containerd[1503]: time="2025-05-12T12:54:59.192676846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.411975041s" May 12 12:54:59.192722 containerd[1503]: time="2025-05-12T12:54:59.192709367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 12 12:54:59.200031 containerd[1503]: time="2025-05-12T12:54:59.199991782Z" level=info msg="CreateContainer within sandbox \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 12 12:54:59.207010 containerd[1503]: time="2025-05-12T12:54:59.206959025Z" level=info msg="Container 1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2: CDI devices from CRI Config.CDIDevices: []" May 12 12:54:59.215468 containerd[1503]: time="2025-05-12T12:54:59.215431889Z" level=info msg="CreateContainer within sandbox \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\"" May 12 12:54:59.216046 containerd[1503]: time="2025-05-12T12:54:59.215993952Z" level=info msg="StartContainer for \"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\"" May 12 12:54:59.217895 containerd[1503]: time="2025-05-12T12:54:59.217830907Z" level=info msg="connecting to shim 1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2" address="unix:///run/containerd/s/52628b4a6b0dec39868b1b5803a303f378a9029b7ddadbbcdbd6d463839314b0" protocol=ttrpc version=3 May 12 12:54:59.246062 systemd[1]: Started cri-containerd-1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2.scope - libcontainer container 1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2. 
May 12 12:54:59.317591 containerd[1503]: time="2025-05-12T12:54:59.317546874Z" level=info msg="StartContainer for \"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\" returns successfully" May 12 12:54:59.790225 kubelet[2619]: E0512 12:54:59.790189 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:54:59.817928 systemd[1]: cri-containerd-1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2.scope: Deactivated successfully. May 12 12:54:59.819026 systemd[1]: cri-containerd-1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2.scope: Consumed 434ms CPU time, 161.6M memory peak, 4K read from disk, 150.3M written to disk. May 12 12:54:59.819313 containerd[1503]: time="2025-05-12T12:54:59.819251319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\" id:\"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\" pid:3393 exited_at:{seconds:1747054499 nanos:818859943}" May 12 12:54:59.825647 containerd[1503]: time="2025-05-12T12:54:59.825564615Z" level=info msg="received exit event container_id:\"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\" id:\"1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2\" pid:3393 exited_at:{seconds:1747054499 nanos:818859943}" May 12 12:54:59.842977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1589405a40ba2c6ff176862fda456117ff77791e84d47c6e2c19deb0fc0566a2-rootfs.mount: Deactivated successfully. 
May 12 12:54:59.899667 kubelet[2619]: I0512 12:54:59.899119 2619 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 12 12:54:59.965749 systemd[1]: Created slice kubepods-burstable-pod4f83f5f2_a16e_4509_bf34_7c2d09c96729.slice - libcontainer container kubepods-burstable-pod4f83f5f2_a16e_4509_bf34_7c2d09c96729.slice. May 12 12:54:59.971424 systemd[1]: Created slice kubepods-besteffort-pod8a9a341a_24f2_45d4_85e0_a0a9e8344fb0.slice - libcontainer container kubepods-besteffort-pod8a9a341a_24f2_45d4_85e0_a0a9e8344fb0.slice. May 12 12:54:59.977087 systemd[1]: Created slice kubepods-besteffort-pod56d5ae46_32c0_49a5_b52c_500efd0dd389.slice - libcontainer container kubepods-besteffort-pod56d5ae46_32c0_49a5_b52c_500efd0dd389.slice. May 12 12:54:59.983632 systemd[1]: Created slice kubepods-besteffort-pod9100eda0_fbd2_4ecb_b7dd_888b6a593939.slice - libcontainer container kubepods-besteffort-pod9100eda0_fbd2_4ecb_b7dd_888b6a593939.slice. May 12 12:54:59.986496 kubelet[2619]: I0512 12:54:59.985476 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f83f5f2-a16e-4509-bf34-7c2d09c96729-config-volume\") pod \"coredns-668d6bf9bc-b5m6q\" (UID: \"4f83f5f2-a16e-4509-bf34-7c2d09c96729\") " pod="kube-system/coredns-668d6bf9bc-b5m6q" May 12 12:54:59.986496 kubelet[2619]: I0512 12:54:59.985549 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d5ae46-32c0-49a5-b52c-500efd0dd389-tigera-ca-bundle\") pod \"calico-kube-controllers-566cf88f5c-jz7rv\" (UID: \"56d5ae46-32c0-49a5-b52c-500efd0dd389\") " pod="calico-system/calico-kube-controllers-566cf88f5c-jz7rv" May 12 12:54:59.986496 kubelet[2619]: I0512 12:54:59.985636 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mgwz\" 
(UniqueName: \"kubernetes.io/projected/54efad5f-19f5-42cb-8b91-b8f69a7b84ed-kube-api-access-5mgwz\") pod \"coredns-668d6bf9bc-d9lnq\" (UID: \"54efad5f-19f5-42cb-8b91-b8f69a7b84ed\") " pod="kube-system/coredns-668d6bf9bc-d9lnq" May 12 12:54:59.986496 kubelet[2619]: I0512 12:54:59.985657 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54efad5f-19f5-42cb-8b91-b8f69a7b84ed-config-volume\") pod \"coredns-668d6bf9bc-d9lnq\" (UID: \"54efad5f-19f5-42cb-8b91-b8f69a7b84ed\") " pod="kube-system/coredns-668d6bf9bc-d9lnq" May 12 12:54:59.986496 kubelet[2619]: I0512 12:54:59.986062 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28cjs\" (UniqueName: \"kubernetes.io/projected/4f83f5f2-a16e-4509-bf34-7c2d09c96729-kube-api-access-28cjs\") pod \"coredns-668d6bf9bc-b5m6q\" (UID: \"4f83f5f2-a16e-4509-bf34-7c2d09c96729\") " pod="kube-system/coredns-668d6bf9bc-b5m6q" May 12 12:54:59.986713 kubelet[2619]: I0512 12:54:59.986090 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nlx\" (UniqueName: \"kubernetes.io/projected/56d5ae46-32c0-49a5-b52c-500efd0dd389-kube-api-access-w4nlx\") pod \"calico-kube-controllers-566cf88f5c-jz7rv\" (UID: \"56d5ae46-32c0-49a5-b52c-500efd0dd389\") " pod="calico-system/calico-kube-controllers-566cf88f5c-jz7rv" May 12 12:54:59.986713 kubelet[2619]: I0512 12:54:59.986122 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a9a341a-24f2-45d4-85e0-a0a9e8344fb0-calico-apiserver-certs\") pod \"calico-apiserver-5f97f8bfd5-b48bt\" (UID: \"8a9a341a-24f2-45d4-85e0-a0a9e8344fb0\") " pod="calico-apiserver/calico-apiserver-5f97f8bfd5-b48bt" May 12 12:54:59.986713 kubelet[2619]: I0512 12:54:59.986138 2619 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9100eda0-fbd2-4ecb-b7dd-888b6a593939-calico-apiserver-certs\") pod \"calico-apiserver-5f97f8bfd5-wzcg4\" (UID: \"9100eda0-fbd2-4ecb-b7dd-888b6a593939\") " pod="calico-apiserver/calico-apiserver-5f97f8bfd5-wzcg4" May 12 12:54:59.986713 kubelet[2619]: I0512 12:54:59.986154 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkf9b\" (UniqueName: \"kubernetes.io/projected/9100eda0-fbd2-4ecb-b7dd-888b6a593939-kube-api-access-mkf9b\") pod \"calico-apiserver-5f97f8bfd5-wzcg4\" (UID: \"9100eda0-fbd2-4ecb-b7dd-888b6a593939\") " pod="calico-apiserver/calico-apiserver-5f97f8bfd5-wzcg4" May 12 12:54:59.986713 kubelet[2619]: I0512 12:54:59.986179 2619 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2j47\" (UniqueName: \"kubernetes.io/projected/8a9a341a-24f2-45d4-85e0-a0a9e8344fb0-kube-api-access-g2j47\") pod \"calico-apiserver-5f97f8bfd5-b48bt\" (UID: \"8a9a341a-24f2-45d4-85e0-a0a9e8344fb0\") " pod="calico-apiserver/calico-apiserver-5f97f8bfd5-b48bt" May 12 12:54:59.989396 systemd[1]: Created slice kubepods-burstable-pod54efad5f_19f5_42cb_8b91_b8f69a7b84ed.slice - libcontainer container kubepods-burstable-pod54efad5f_19f5_42cb_8b91_b8f69a7b84ed.slice. 
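The kubelet lines above (`E0512 12:54:52.842645 2619 driver-call.go:262] …`) use the klog header: a severity letter (I/W/E/F), MMDD date, timestamp, PID, and source `file:line`. A sketch of a parser for that header, written against the examples in this log (the regex is an assumption inferred from these lines, not an official grammar):

```python
import re

# klog header, as seen in the kubelet entries above:
# <severity>MMDD hh:mm:ss.uuuuuu <pid> <file>:<line>] <message>
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<pid>\d+)\s+'
    r'(?P<src>[\w.-]+:\d+)\]\s(?P<msg>.*)'
)

def parse_klog(line: str):
    """Return the klog header fields as a dict, or None if the line doesn't match."""
    m = KLOG.match(line)
    return m.groupdict() if m else None

# Sample taken from the driver-call.go errors earlier in this log.
rec = parse_klog('E0512 12:54:52.842645 2619 driver-call.go:262] Failed to unmarshal output')
```

The severity letter makes it easy to filter the E-level entries (like the FlexVolume and sandbox failures here) out of the interleaved stream.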
May 12 12:55:00.268722 kubelet[2619]: E0512 12:55:00.268673 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:00.270017 containerd[1503]: time="2025-05-12T12:55:00.269978584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b5m6q,Uid:4f83f5f2-a16e-4509-bf34-7c2d09c96729,Namespace:kube-system,Attempt:0,}" May 12 12:55:00.274275 containerd[1503]: time="2025-05-12T12:55:00.274236950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-b48bt,Uid:8a9a341a-24f2-45d4-85e0-a0a9e8344fb0,Namespace:calico-apiserver,Attempt:0,}" May 12 12:55:00.279962 containerd[1503]: time="2025-05-12T12:55:00.279889451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566cf88f5c-jz7rv,Uid:56d5ae46-32c0-49a5-b52c-500efd0dd389,Namespace:calico-system,Attempt:0,}" May 12 12:55:00.296106 containerd[1503]: time="2025-05-12T12:55:00.291427180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-wzcg4,Uid:9100eda0-fbd2-4ecb-b7dd-888b6a593939,Namespace:calico-apiserver,Attempt:0,}" May 12 12:55:00.300898 kubelet[2619]: E0512 12:55:00.298810 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:00.339935 containerd[1503]: time="2025-05-12T12:55:00.339900031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9lnq,Uid:54efad5f-19f5-42cb-8b91-b8f69a7b84ed,Namespace:kube-system,Attempt:0,}" May 12 12:55:00.684825 containerd[1503]: time="2025-05-12T12:55:00.684710436Z" level=error msg="Failed to destroy network for sandbox \"4e9c91a4595cf227bdffe1a5401d08e522c415726ae43c500bceda82db4330ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.686198 containerd[1503]: time="2025-05-12T12:55:00.686158052Z" level=error msg="Failed to destroy network for sandbox \"cd21e0b903761b4064e724bafd122d36f949a7456a009ff892d8115ae53e9bfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.687102 containerd[1503]: time="2025-05-12T12:55:00.687067648Z" level=error msg="Failed to destroy network for sandbox \"c53268a87b648ffda932454794dcaac90f1d394aab5596713e174cb07c9009f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.690447 containerd[1503]: time="2025-05-12T12:55:00.690388257Z" level=error msg="Failed to destroy network for sandbox \"91bcb6d88d3a2c28fd62a96bd609daffe098883b97bb7a5d4ba89897dfc081ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.695763 containerd[1503]: time="2025-05-12T12:55:00.695712265Z" level=error msg="Failed to destroy network for sandbox \"45e4f9be9b5873617801fbaa3875d2eebb5531c78bb69b40535c8df9572d8e36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.697760 containerd[1503]: time="2025-05-12T12:55:00.697715783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b5m6q,Uid:4f83f5f2-a16e-4509-bf34-7c2d09c96729,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4e9c91a4595cf227bdffe1a5401d08e522c415726ae43c500bceda82db4330ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.702446 kubelet[2619]: E0512 12:55:00.702310 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9c91a4595cf227bdffe1a5401d08e522c415726ae43c500bceda82db4330ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.705671 kubelet[2619]: E0512 12:55:00.705638 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9c91a4595cf227bdffe1a5401d08e522c415726ae43c500bceda82db4330ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b5m6q" May 12 12:55:00.705785 kubelet[2619]: E0512 12:55:00.705768 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e9c91a4595cf227bdffe1a5401d08e522c415726ae43c500bceda82db4330ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b5m6q" May 12 12:55:00.705923 kubelet[2619]: E0512 12:55:00.705895 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b5m6q_kube-system(4f83f5f2-a16e-4509-bf34-7c2d09c96729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-b5m6q_kube-system(4f83f5f2-a16e-4509-bf34-7c2d09c96729)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e9c91a4595cf227bdffe1a5401d08e522c415726ae43c500bceda82db4330ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b5m6q" podUID="4f83f5f2-a16e-4509-bf34-7c2d09c96729" May 12 12:55:00.710135 containerd[1503]: time="2025-05-12T12:55:00.710083985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566cf88f5c-jz7rv,Uid:56d5ae46-32c0-49a5-b52c-500efd0dd389,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd21e0b903761b4064e724bafd122d36f949a7456a009ff892d8115ae53e9bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.710345 kubelet[2619]: E0512 12:55:00.710320 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd21e0b903761b4064e724bafd122d36f949a7456a009ff892d8115ae53e9bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.710444 kubelet[2619]: E0512 12:55:00.710363 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd21e0b903761b4064e724bafd122d36f949a7456a009ff892d8115ae53e9bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-566cf88f5c-jz7rv" May 12 12:55:00.710444 kubelet[2619]: E0512 12:55:00.710381 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd21e0b903761b4064e724bafd122d36f949a7456a009ff892d8115ae53e9bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-566cf88f5c-jz7rv" May 12 12:55:00.710444 kubelet[2619]: E0512 12:55:00.710416 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-566cf88f5c-jz7rv_calico-system(56d5ae46-32c0-49a5-b52c-500efd0dd389)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-566cf88f5c-jz7rv_calico-system(56d5ae46-32c0-49a5-b52c-500efd0dd389)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd21e0b903761b4064e724bafd122d36f949a7456a009ff892d8115ae53e9bfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-566cf88f5c-jz7rv" podUID="56d5ae46-32c0-49a5-b52c-500efd0dd389" May 12 12:55:00.722264 containerd[1503]: time="2025-05-12T12:55:00.722193578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9lnq,Uid:54efad5f-19f5-42cb-8b91-b8f69a7b84ed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53268a87b648ffda932454794dcaac90f1d394aab5596713e174cb07c9009f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
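Every sandbox failure in this stretch of the log reduces to the same root cause: the CNI plugin stats `/var/lib/calico/nodename` and the file does not exist yet. As a purely illustrative sketch (the sample lines below are condensed from the errors above), the repeats can be collapsed to their shared failing path:

```python
import re
from collections import Counter

# Extract the path from errors of the form
# "... stat <path>: no such file or directory ..." as seen above.
STAT_ERR = re.compile(r'stat (?P<path>/\S+?): no such file or directory')

# Condensed samples of the add/delete failures in this log.
lines = [
    'plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory',
    'plugin type="calico" failed (delete): stat /var/lib/calico/nodename: no such file or directory',
]

# Count occurrences per failing path; here both collapse to one path.
counts = Counter(m.group("path") for l in lines if (m := STAT_ERR.search(l)))
```

Deduplicating this way makes it obvious that the flood of RunPodSandbox errors is one condition (calico/node not yet running and writing its nodename file), not many independent faults.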
May 12 12:55:00.722788 kubelet[2619]: E0512 12:55:00.722740 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53268a87b648ffda932454794dcaac90f1d394aab5596713e174cb07c9009f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.722788 kubelet[2619]: E0512 12:55:00.722785 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53268a87b648ffda932454794dcaac90f1d394aab5596713e174cb07c9009f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d9lnq" May 12 12:55:00.722899 kubelet[2619]: E0512 12:55:00.722803 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53268a87b648ffda932454794dcaac90f1d394aab5596713e174cb07c9009f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d9lnq" May 12 12:55:00.723358 kubelet[2619]: E0512 12:55:00.722833 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d9lnq_kube-system(54efad5f-19f5-42cb-8b91-b8f69a7b84ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d9lnq_kube-system(54efad5f-19f5-42cb-8b91-b8f69a7b84ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c53268a87b648ffda932454794dcaac90f1d394aab5596713e174cb07c9009f5\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d9lnq" podUID="54efad5f-19f5-42cb-8b91-b8f69a7b84ed" May 12 12:55:00.726477 systemd[1]: Created slice kubepods-besteffort-pod03a90fea_3da3_425f_9a3a_c6653e1060a7.slice - libcontainer container kubepods-besteffort-pod03a90fea_3da3_425f_9a3a_c6653e1060a7.slice. May 12 12:55:00.728623 containerd[1503]: time="2025-05-12T12:55:00.728579867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhnrs,Uid:03a90fea-3da3-425f-9a3a-c6653e1060a7,Namespace:calico-system,Attempt:0,}" May 12 12:55:00.734535 containerd[1503]: time="2025-05-12T12:55:00.734486737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-b48bt,Uid:8a9a341a-24f2-45d4-85e0-a0a9e8344fb0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bcb6d88d3a2c28fd62a96bd609daffe098883b97bb7a5d4ba89897dfc081ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.734931 kubelet[2619]: E0512 12:55:00.734664 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bcb6d88d3a2c28fd62a96bd609daffe098883b97bb7a5d4ba89897dfc081ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.734931 kubelet[2619]: E0512 12:55:00.734718 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bcb6d88d3a2c28fd62a96bd609daffe098883b97bb7a5d4ba89897dfc081ab\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-b48bt" May 12 12:55:00.734931 kubelet[2619]: E0512 12:55:00.734734 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bcb6d88d3a2c28fd62a96bd609daffe098883b97bb7a5d4ba89897dfc081ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-b48bt" May 12 12:55:00.735437 containerd[1503]: time="2025-05-12T12:55:00.735396172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-wzcg4,Uid:9100eda0-fbd2-4ecb-b7dd-888b6a593939,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e4f9be9b5873617801fbaa3875d2eebb5531c78bb69b40535c8df9572d8e36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.735658 kubelet[2619]: E0512 12:55:00.735623 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e4f9be9b5873617801fbaa3875d2eebb5531c78bb69b40535c8df9572d8e36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.735757 kubelet[2619]: E0512 12:55:00.735741 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"45e4f9be9b5873617801fbaa3875d2eebb5531c78bb69b40535c8df9572d8e36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-wzcg4" May 12 12:55:00.735968 kubelet[2619]: E0512 12:55:00.735871 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e4f9be9b5873617801fbaa3875d2eebb5531c78bb69b40535c8df9572d8e36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-wzcg4" May 12 12:55:00.736070 kubelet[2619]: E0512 12:55:00.736044 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f97f8bfd5-wzcg4_calico-apiserver(9100eda0-fbd2-4ecb-b7dd-888b6a593939)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f97f8bfd5-wzcg4_calico-apiserver(9100eda0-fbd2-4ecb-b7dd-888b6a593939)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45e4f9be9b5873617801fbaa3875d2eebb5531c78bb69b40535c8df9572d8e36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-wzcg4" podUID="9100eda0-fbd2-4ecb-b7dd-888b6a593939" May 12 12:55:00.737810 kubelet[2619]: E0512 12:55:00.734767 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f97f8bfd5-b48bt_calico-apiserver(8a9a341a-24f2-45d4-85e0-a0a9e8344fb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5f97f8bfd5-b48bt_calico-apiserver(8a9a341a-24f2-45d4-85e0-a0a9e8344fb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91bcb6d88d3a2c28fd62a96bd609daffe098883b97bb7a5d4ba89897dfc081ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-b48bt" podUID="8a9a341a-24f2-45d4-85e0-a0a9e8344fb0" May 12 12:55:00.779091 containerd[1503]: time="2025-05-12T12:55:00.779047635Z" level=error msg="Failed to destroy network for sandbox \"a0d5f8afb77525149a8b4cc226bfe52816de8a0359c5b6c74457054680887bde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.779991 containerd[1503]: time="2025-05-12T12:55:00.779957390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhnrs,Uid:03a90fea-3da3-425f-9a3a-c6653e1060a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d5f8afb77525149a8b4cc226bfe52816de8a0359c5b6c74457054680887bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.780226 kubelet[2619]: E0512 12:55:00.780169 2619 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d5f8afb77525149a8b4cc226bfe52816de8a0359c5b6c74457054680887bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 12:55:00.780276 kubelet[2619]: E0512 12:55:00.780239 2619 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d5f8afb77525149a8b4cc226bfe52816de8a0359c5b6c74457054680887bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhnrs" May 12 12:55:00.780276 kubelet[2619]: E0512 12:55:00.780264 2619 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d5f8afb77525149a8b4cc226bfe52816de8a0359c5b6c74457054680887bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhnrs" May 12 12:55:00.780332 kubelet[2619]: E0512 12:55:00.780305 2619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bhnrs_calico-system(03a90fea-3da3-425f-9a3a-c6653e1060a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bhnrs_calico-system(03a90fea-3da3-425f-9a3a-c6653e1060a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0d5f8afb77525149a8b4cc226bfe52816de8a0359c5b6c74457054680887bde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bhnrs" podUID="03a90fea-3da3-425f-9a3a-c6653e1060a7" May 12 12:55:00.795021 kubelet[2619]: E0512 12:55:00.794979 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:00.796692 containerd[1503]: 
time="2025-05-12T12:55:00.796664041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 12 12:55:01.208612 systemd[1]: run-netns-cni\x2d1b629414\x2d23e8\x2d716f\x2d85e8\x2d00003d8339e3.mount: Deactivated successfully. May 12 12:55:01.208701 systemd[1]: run-netns-cni\x2d091072cb\x2d7b8a\x2d6b99\x2d182b\x2d3ec26dc422c1.mount: Deactivated successfully. May 12 12:55:01.208746 systemd[1]: run-netns-cni\x2dd481e6a2\x2d69ff\x2d0ce3\x2dd1c1\x2dcf964ee8330c.mount: Deactivated successfully. May 12 12:55:01.208790 systemd[1]: run-netns-cni\x2d33319f17\x2df2bd\x2de408\x2d9e52\x2d975a13165c9b.mount: Deactivated successfully. May 12 12:55:02.550331 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:44922.service - OpenSSH per-connection server daemon (10.0.0.1:44922). May 12 12:55:02.610587 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 44922 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:55:02.611828 sshd-session[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:55:02.615892 systemd-logind[1493]: New session 8 of user core. May 12 12:55:02.626017 systemd[1]: Started session-8.scope - Session 8 of User core. May 12 12:55:02.748305 sshd[3653]: Connection closed by 10.0.0.1 port 44922 May 12 12:55:02.748809 sshd-session[3651]: pam_unix(sshd:session): session closed for user core May 12 12:55:02.752956 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:44922.service: Deactivated successfully. May 12 12:55:02.754824 systemd[1]: session-8.scope: Deactivated successfully. May 12 12:55:02.756428 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. May 12 12:55:02.759104 systemd-logind[1493]: Removed session 8. May 12 12:55:07.066325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380914068.mount: Deactivated successfully. 
May 12 12:55:07.233886 containerd[1503]: time="2025-05-12T12:55:07.233742743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:55:07.234347 containerd[1503]: time="2025-05-12T12:55:07.234294200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 12 12:55:07.235007 containerd[1503]: time="2025-05-12T12:55:07.234974021Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:55:07.236746 containerd[1503]: time="2025-05-12T12:55:07.236713153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 12:55:07.237174 containerd[1503]: time="2025-05-12T12:55:07.237143486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.440444044s" May 12 12:55:07.237206 containerd[1503]: time="2025-05-12T12:55:07.237176087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 12 12:55:07.244855 containerd[1503]: time="2025-05-12T12:55:07.244814319Z" level=info msg="CreateContainer within sandbox \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 12 12:55:07.255566 containerd[1503]: time="2025-05-12T12:55:07.255524283Z" level=info msg="Container 
22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419: CDI devices from CRI Config.CDIDevices: []" May 12 12:55:07.267388 containerd[1503]: time="2025-05-12T12:55:07.267337081Z" level=info msg="CreateContainer within sandbox \"a717757e2a2a9d899e0f94e0951713c121a00446603739fc5376395cddb0c71f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\"" May 12 12:55:07.268112 containerd[1503]: time="2025-05-12T12:55:07.268060823Z" level=info msg="StartContainer for \"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\"" May 12 12:55:07.269912 containerd[1503]: time="2025-05-12T12:55:07.269877478Z" level=info msg="connecting to shim 22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419" address="unix:///run/containerd/s/52628b4a6b0dec39868b1b5803a303f378a9029b7ddadbbcdbd6d463839314b0" protocol=ttrpc version=3 May 12 12:55:07.298993 systemd[1]: Started cri-containerd-22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419.scope - libcontainer container 22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419. May 12 12:55:07.332916 containerd[1503]: time="2025-05-12T12:55:07.332100802Z" level=info msg="StartContainer for \"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\" returns successfully" May 12 12:55:07.518899 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 12 12:55:07.518984 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 12 12:55:07.769506 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:44928.service - OpenSSH per-connection server daemon (10.0.0.1:44928). 
May 12 12:55:07.820941 kubelet[2619]: E0512 12:55:07.820668 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:07.824993 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 44928 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:55:07.826401 sshd-session[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:55:07.832511 systemd-logind[1493]: New session 9 of user core. May 12 12:55:07.843024 systemd[1]: Started session-9.scope - Session 9 of User core. May 12 12:55:07.922506 containerd[1503]: time="2025-05-12T12:55:07.922460521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\" id:\"56f0f3f756de99b318fa8e24108577d000d61d6c0089c83cf95f0cf8ed139a9c\" pid:3755 exit_status:1 exited_at:{seconds:1747054507 nanos:922016628}" May 12 12:55:07.959661 sshd[3741]: Connection closed by 10.0.0.1 port 44928 May 12 12:55:07.960009 sshd-session[3739]: pam_unix(sshd:session): session closed for user core May 12 12:55:07.962701 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:44928.service: Deactivated successfully. May 12 12:55:07.964391 systemd[1]: session-9.scope: Deactivated successfully. May 12 12:55:07.966161 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. May 12 12:55:07.967029 systemd-logind[1493]: Removed session 9. 
May 12 12:55:08.820311 kubelet[2619]: E0512 12:55:08.820140 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:08.907798 containerd[1503]: time="2025-05-12T12:55:08.907756096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\" id:\"2cae0926ac114a92c4e69ebb77ff9f695b85e9c93b9a8a27c3e67773c31f5bd6\" pid:3888 exit_status:1 exited_at:{seconds:1747054508 nanos:907447327}" May 12 12:55:11.491044 kubelet[2619]: I0512 12:55:11.490997 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 12:55:11.491044 kubelet[2619]: E0512 12:55:11.491299 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:11.512980 kubelet[2619]: I0512 12:55:11.512884 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xrd5n" podStartSLOduration=5.148309832 podStartE2EDuration="26.512834244s" podCreationTimestamp="2025-05-12 12:54:45 +0000 UTC" firstStartedPulling="2025-05-12 12:54:45.874957265 +0000 UTC m=+13.248281804" lastFinishedPulling="2025-05-12 12:55:07.239481677 +0000 UTC m=+34.612806216" observedRunningTime="2025-05-12 12:55:07.851376888 +0000 UTC m=+35.224701427" watchObservedRunningTime="2025-05-12 12:55:11.512834244 +0000 UTC m=+38.886158783" May 12 12:55:11.721382 containerd[1503]: time="2025-05-12T12:55:11.721341795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-b48bt,Uid:8a9a341a-24f2-45d4-85e0-a0a9e8344fb0,Namespace:calico-apiserver,Attempt:0,}" May 12 12:55:11.825187 kubelet[2619]: E0512 12:55:11.825086 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:11.928206 systemd-networkd[1440]: calibbd5c29ae5a: Link UP May 12 12:55:11.928385 systemd-networkd[1440]: calibbd5c29ae5a: Gained carrier May 12 12:55:11.939626 containerd[1503]: 2025-05-12 12:55:11.742 [INFO][3957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 12 12:55:11.939626 containerd[1503]: 2025-05-12 12:55:11.787 [INFO][3957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0 calico-apiserver-5f97f8bfd5- calico-apiserver 8a9a341a-24f2-45d4-85e0-a0a9e8344fb0 681 0 2025-05-12 12:54:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f97f8bfd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f97f8bfd5-b48bt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbd5c29ae5a [] []}} ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-" May 12 12:55:11.939626 containerd[1503]: 2025-05-12 12:55:11.788 [INFO][3957] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:11.939626 containerd[1503]: 2025-05-12 12:55:11.878 [INFO][3974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" 
HandleID="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Workload="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.895 [INFO][3974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" HandleID="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Workload="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f97f8bfd5-b48bt", "timestamp":"2025-05-12 12:55:11.878135439 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.895 [INFO][3974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.895 [INFO][3974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.895 [INFO][3974] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.897 [INFO][3974] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" host="localhost" May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.902 [INFO][3974] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.906 [INFO][3974] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.908 [INFO][3974] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.910 [INFO][3974] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 12:55:11.940973 containerd[1503]: 2025-05-12 12:55:11.910 [INFO][3974] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" host="localhost" May 12 12:55:11.941185 containerd[1503]: 2025-05-12 12:55:11.911 [INFO][3974] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90 May 12 12:55:11.941185 containerd[1503]: 2025-05-12 12:55:11.915 [INFO][3974] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" host="localhost" May 12 12:55:11.941185 containerd[1503]: 2025-05-12 12:55:11.919 [INFO][3974] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" host="localhost" May 12 12:55:11.941185 containerd[1503]: 2025-05-12 12:55:11.919 [INFO][3974] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" host="localhost" May 12 12:55:11.941185 containerd[1503]: 2025-05-12 12:55:11.919 [INFO][3974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 12:55:11.941185 containerd[1503]: 2025-05-12 12:55:11.919 [INFO][3974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" HandleID="k8s-pod-network.f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Workload="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:11.941290 containerd[1503]: 2025-05-12 12:55:11.921 [INFO][3957] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0", GenerateName:"calico-apiserver-5f97f8bfd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a9a341a-24f2-45d4-85e0-a0a9e8344fb0", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8bfd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f97f8bfd5-b48bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbd5c29ae5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:11.941337 containerd[1503]: 2025-05-12 12:55:11.922 [INFO][3957] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:11.941337 containerd[1503]: 2025-05-12 12:55:11.922 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbd5c29ae5a ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:11.941337 containerd[1503]: 2025-05-12 12:55:11.929 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:11.941396 containerd[1503]: 2025-05-12 12:55:11.929 [INFO][3957] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0", GenerateName:"calico-apiserver-5f97f8bfd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a9a341a-24f2-45d4-85e0-a0a9e8344fb0", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8bfd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90", Pod:"calico-apiserver-5f97f8bfd5-b48bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbd5c29ae5a", MAC:"c6:d1:71:11:d2:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:11.941440 containerd[1503]: 2025-05-12 12:55:11.936 [INFO][3957] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" 
Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-b48bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--b48bt-eth0" May 12 12:55:12.002862 containerd[1503]: time="2025-05-12T12:55:12.001486066Z" level=info msg="connecting to shim f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90" address="unix:///run/containerd/s/57c988ba88fcf8aa326bde034e8c0b19efb727c6b465411dea1c5fe098da1775" namespace=k8s.io protocol=ttrpc version=3 May 12 12:55:12.038036 systemd[1]: Started cri-containerd-f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90.scope - libcontainer container f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90. May 12 12:55:12.069803 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 12:55:12.175867 containerd[1503]: time="2025-05-12T12:55:12.175220637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-b48bt,Uid:8a9a341a-24f2-45d4-85e0-a0a9e8344fb0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90\"" May 12 12:55:12.179030 containerd[1503]: time="2025-05-12T12:55:12.178999856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 12 12:55:12.311182 systemd-networkd[1440]: vxlan.calico: Link UP May 12 12:55:12.311192 systemd-networkd[1440]: vxlan.calico: Gained carrier May 12 12:55:12.975186 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:51764.service - OpenSSH per-connection server daemon (10.0.0.1:51764). May 12 12:55:13.032080 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 51764 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:55:13.033318 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:55:13.037008 systemd-logind[1493]: New session 10 of user core. 
May 12 12:55:13.048992 systemd[1]: Started session-10.scope - Session 10 of User core. May 12 12:55:13.161864 sshd[4171]: Connection closed by 10.0.0.1 port 51764 May 12 12:55:13.162079 sshd-session[4169]: pam_unix(sshd:session): session closed for user core May 12 12:55:13.165284 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:51764.service: Deactivated successfully. May 12 12:55:13.166798 systemd[1]: session-10.scope: Deactivated successfully. May 12 12:55:13.167588 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. May 12 12:55:13.168936 systemd-logind[1493]: Removed session 10. May 12 12:55:13.274107 systemd-networkd[1440]: calibbd5c29ae5a: Gained IPv6LL May 12 12:55:13.725775 kubelet[2619]: E0512 12:55:13.725638 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:13.726963 containerd[1503]: time="2025-05-12T12:55:13.726065581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-wzcg4,Uid:9100eda0-fbd2-4ecb-b7dd-888b6a593939,Namespace:calico-apiserver,Attempt:0,}" May 12 12:55:13.726963 containerd[1503]: time="2025-05-12T12:55:13.726348188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9lnq,Uid:54efad5f-19f5-42cb-8b91-b8f69a7b84ed,Namespace:kube-system,Attempt:0,}" May 12 12:55:13.726963 containerd[1503]: time="2025-05-12T12:55:13.726462671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhnrs,Uid:03a90fea-3da3-425f-9a3a-c6653e1060a7,Namespace:calico-system,Attempt:0,}" May 12 12:55:13.865546 systemd-networkd[1440]: cali1315288ffd1: Link UP May 12 12:55:13.866445 systemd-networkd[1440]: cali1315288ffd1: Gained carrier May 12 12:55:13.876642 containerd[1503]: 2025-05-12 12:55:13.777 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0 calico-apiserver-5f97f8bfd5- calico-apiserver 9100eda0-fbd2-4ecb-b7dd-888b6a593939 680 0 2025-05-12 12:54:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f97f8bfd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f97f8bfd5-wzcg4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1315288ffd1 [] []}} ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-" May 12 12:55:13.876642 containerd[1503]: 2025-05-12 12:55:13.777 [INFO][4184] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.876642 containerd[1503]: 2025-05-12 12:55:13.809 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" HandleID="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Workload="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.828 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" HandleID="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Workload="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002f3410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f97f8bfd5-wzcg4", "timestamp":"2025-05-12 12:55:13.809976031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.828 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.828 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.828 [INFO][4233] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.832 [INFO][4233] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" host="localhost" May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.839 [INFO][4233] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.844 [INFO][4233] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.846 [INFO][4233] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.850 [INFO][4233] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 12:55:13.876820 containerd[1503]: 2025-05-12 12:55:13.850 [INFO][4233] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" host="localhost" May 12 12:55:13.877057 containerd[1503]: 2025-05-12 12:55:13.851 [INFO][4233] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e May 12 12:55:13.877057 containerd[1503]: 2025-05-12 12:55:13.856 [INFO][4233] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" host="localhost" May 12 12:55:13.877057 containerd[1503]: 2025-05-12 12:55:13.861 [INFO][4233] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" host="localhost" May 12 12:55:13.877057 containerd[1503]: 2025-05-12 12:55:13.861 [INFO][4233] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" host="localhost" May 12 12:55:13.877057 containerd[1503]: 2025-05-12 12:55:13.861 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 12 12:55:13.877057 containerd[1503]: 2025-05-12 12:55:13.861 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" HandleID="k8s-pod-network.69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Workload="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.878228 containerd[1503]: 2025-05-12 12:55:13.863 [INFO][4184] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0", GenerateName:"calico-apiserver-5f97f8bfd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9100eda0-fbd2-4ecb-b7dd-888b6a593939", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8bfd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f97f8bfd5-wzcg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1315288ffd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:13.878301 containerd[1503]: 2025-05-12 12:55:13.863 [INFO][4184] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.878301 containerd[1503]: 2025-05-12 12:55:13.863 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1315288ffd1 ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.878301 containerd[1503]: 2025-05-12 12:55:13.866 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.878368 containerd[1503]: 2025-05-12 12:55:13.866 [INFO][4184] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0", GenerateName:"calico-apiserver-5f97f8bfd5-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"9100eda0-fbd2-4ecb-b7dd-888b6a593939", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f97f8bfd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e", Pod:"calico-apiserver-5f97f8bfd5-wzcg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1315288ffd1", MAC:"1e:be:44:93:55:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:13.878470 containerd[1503]: 2025-05-12 12:55:13.874 [INFO][4184] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" Namespace="calico-apiserver" Pod="calico-apiserver-5f97f8bfd5-wzcg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f97f8bfd5--wzcg4-eth0" May 12 12:55:13.900296 containerd[1503]: time="2025-05-12T12:55:13.900258243Z" level=info msg="connecting to shim 69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e" address="unix:///run/containerd/s/125129d04f4e4eb3037f45390a4d0a5d8219bc5764d74ab3f4ff80dccda1723c" namespace=k8s.io protocol=ttrpc version=3 May 12 12:55:13.927113 systemd[1]: Started 
cri-containerd-69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e.scope - libcontainer container 69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e. May 12 12:55:13.939102 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 12:55:13.962954 containerd[1503]: time="2025-05-12T12:55:13.962904354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f97f8bfd5-wzcg4,Uid:9100eda0-fbd2-4ecb-b7dd-888b6a593939,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e\"" May 12 12:55:13.967074 systemd-networkd[1440]: cali3f6aa4620a5: Link UP May 12 12:55:13.967734 systemd-networkd[1440]: cali3f6aa4620a5: Gained carrier May 12 12:55:13.978082 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL May 12 12:55:13.980350 containerd[1503]: 2025-05-12 12:55:13.785 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0 coredns-668d6bf9bc- kube-system 54efad5f-19f5-42cb-8b91-b8f69a7b84ed 682 0 2025-05-12 12:54:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d9lnq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f6aa4620a5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-" May 12 12:55:13.980350 containerd[1503]: 2025-05-12 12:55:13.785 [INFO][4194] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 12:55:13.980350 containerd[1503]: 2025-05-12 12:55:13.812 [INFO][4246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" HandleID="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Workload="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.830 [INFO][4246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" HandleID="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Workload="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000305880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d9lnq", "timestamp":"2025-05-12 12:55:13.812954227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.830 [INFO][4246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.861 [INFO][4246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.861 [INFO][4246] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.931 [INFO][4246] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" host="localhost" May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.936 [INFO][4246] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.943 [INFO][4246] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.945 [INFO][4246] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.947 [INFO][4246] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 12:55:13.981145 containerd[1503]: 2025-05-12 12:55:13.947 [INFO][4246] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" host="localhost" May 12 12:55:13.981419 containerd[1503]: 2025-05-12 12:55:13.949 [INFO][4246] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5 May 12 12:55:13.981419 containerd[1503]: 2025-05-12 12:55:13.953 [INFO][4246] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" host="localhost" May 12 12:55:13.981419 containerd[1503]: 2025-05-12 12:55:13.959 [INFO][4246] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" host="localhost" May 12 12:55:13.981419 containerd[1503]: 2025-05-12 12:55:13.960 [INFO][4246] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" host="localhost" May 12 12:55:13.981419 containerd[1503]: 2025-05-12 12:55:13.960 [INFO][4246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 12:55:13.981419 containerd[1503]: 2025-05-12 12:55:13.960 [INFO][4246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" HandleID="k8s-pod-network.e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Workload="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 12:55:13.981563 containerd[1503]: 2025-05-12 12:55:13.962 [INFO][4194] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"54efad5f-19f5-42cb-8b91-b8f69a7b84ed", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d9lnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f6aa4620a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:13.981631 containerd[1503]: 2025-05-12 12:55:13.963 [INFO][4194] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 12:55:13.981631 containerd[1503]: 2025-05-12 12:55:13.963 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f6aa4620a5 ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 12:55:13.981631 containerd[1503]: 2025-05-12 12:55:13.968 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 
12:55:13.981728 containerd[1503]: 2025-05-12 12:55:13.969 [INFO][4194] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"54efad5f-19f5-42cb-8b91-b8f69a7b84ed", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5", Pod:"coredns-668d6bf9bc-d9lnq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f6aa4620a5", MAC:"06:e5:a2:b8:65:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:13.981728 containerd[1503]: 2025-05-12 12:55:13.977 [INFO][4194] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-d9lnq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d9lnq-eth0" May 12 12:55:14.008026 containerd[1503]: time="2025-05-12T12:55:14.007959253Z" level=info msg="connecting to shim e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5" address="unix:///run/containerd/s/708c623e9eaf8f68bf3a488457b256169cdb375bfe5a41ea2099d3c7f368d0b9" namespace=k8s.io protocol=ttrpc version=3 May 12 12:55:14.033018 systemd[1]: Started cri-containerd-e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5.scope - libcontainer container e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5. 
May 12 12:55:14.049216 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 12:55:14.065403 systemd-networkd[1440]: cali22c30393496: Link UP May 12 12:55:14.065795 systemd-networkd[1440]: cali22c30393496: Gained carrier May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:13.780 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bhnrs-eth0 csi-node-driver- calico-system 03a90fea-3da3-425f-9a3a-c6653e1060a7 585 0 2025-05-12 12:54:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bhnrs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22c30393496 [] []}} ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:13.780 [INFO][4208] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:13.819 [INFO][4239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" HandleID="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Workload="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.081533 containerd[1503]: 
2025-05-12 12:55:13.833 [INFO][4239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" HandleID="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Workload="localhost-k8s-csi--node--driver--bhnrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3d70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bhnrs", "timestamp":"2025-05-12 12:55:13.819982685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:13.833 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:13.960 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:13.960 [INFO][4239] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.031 [INFO][4239] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.037 [INFO][4239] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.044 [INFO][4239] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.046 [INFO][4239] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.048 [INFO][4239] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.048 [INFO][4239] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.050 [INFO][4239] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4 May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.053 [INFO][4239] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.059 [INFO][4239] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.059 [INFO][4239] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" host="localhost" May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.059 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 12:55:14.081533 containerd[1503]: 2025-05-12 12:55:14.059 [INFO][4239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" HandleID="k8s-pod-network.ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Workload="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.082254 containerd[1503]: 2025-05-12 12:55:14.061 [INFO][4208] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhnrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03a90fea-3da3-425f-9a3a-c6653e1060a7", ResourceVersion:"585", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bhnrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22c30393496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:14.082254 containerd[1503]: 2025-05-12 12:55:14.061 [INFO][4208] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.082254 containerd[1503]: 2025-05-12 12:55:14.061 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22c30393496 ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.082254 containerd[1503]: 2025-05-12 12:55:14.066 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.082254 containerd[1503]: 2025-05-12 12:55:14.066 [INFO][4208] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" 
Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhnrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03a90fea-3da3-425f-9a3a-c6653e1060a7", ResourceVersion:"585", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4", Pod:"csi-node-driver-bhnrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22c30393496", MAC:"8e:8f:d7:73:65:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:14.082254 containerd[1503]: 2025-05-12 12:55:14.078 [INFO][4208] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" Namespace="calico-system" Pod="csi-node-driver-bhnrs" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhnrs-eth0" May 12 12:55:14.084733 containerd[1503]: 
time="2025-05-12T12:55:14.084698872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d9lnq,Uid:54efad5f-19f5-42cb-8b91-b8f69a7b84ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5\"" May 12 12:55:14.085930 kubelet[2619]: E0512 12:55:14.085529 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:14.088238 containerd[1503]: time="2025-05-12T12:55:14.088207319Z" level=info msg="CreateContainer within sandbox \"e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 12:55:14.100189 containerd[1503]: time="2025-05-12T12:55:14.100159774Z" level=info msg="Container 4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189: CDI devices from CRI Config.CDIDevices: []" May 12 12:55:14.102392 containerd[1503]: time="2025-05-12T12:55:14.102360629Z" level=info msg="connecting to shim ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4" address="unix:///run/containerd/s/b70698305b2e563d2d5be35971e79323fe5719fb42fa87ab5c69b00a2ea202b5" namespace=k8s.io protocol=ttrpc version=3 May 12 12:55:14.104789 containerd[1503]: time="2025-05-12T12:55:14.104757208Z" level=info msg="CreateContainer within sandbox \"e680aeb1d7da05d20ad644f440d71536b6716ff797d5d7939eaf87f0fa3bedb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189\"" May 12 12:55:14.105314 containerd[1503]: time="2025-05-12T12:55:14.105285421Z" level=info msg="StartContainer for \"4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189\"" May 12 12:55:14.107521 containerd[1503]: time="2025-05-12T12:55:14.107448155Z" level=info msg="connecting to shim 
4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189" address="unix:///run/containerd/s/708c623e9eaf8f68bf3a488457b256169cdb375bfe5a41ea2099d3c7f368d0b9" protocol=ttrpc version=3 May 12 12:55:14.130076 systemd[1]: Started cri-containerd-4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189.scope - libcontainer container 4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189. May 12 12:55:14.131178 systemd[1]: Started cri-containerd-ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4.scope - libcontainer container ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4. May 12 12:55:14.143785 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 12:55:14.158777 containerd[1503]: time="2025-05-12T12:55:14.158693743Z" level=info msg="StartContainer for \"4214f2cf0e646cdff4a431121139b88972a138a51531015728c1d9b52b620189\" returns successfully" May 12 12:55:14.159831 containerd[1503]: time="2025-05-12T12:55:14.159738888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhnrs,Uid:03a90fea-3da3-425f-9a3a-c6653e1060a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4\"" May 12 12:55:14.867777 kubelet[2619]: E0512 12:55:14.867700 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:14.880667 kubelet[2619]: I0512 12:55:14.879266 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d9lnq" podStartSLOduration=37.87925285 podStartE2EDuration="37.87925285s" podCreationTimestamp="2025-05-12 12:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 12:55:14.878931882 +0000 
UTC m=+42.252256421" watchObservedRunningTime="2025-05-12 12:55:14.87925285 +0000 UTC m=+42.252577389" May 12 12:55:15.385994 systemd-networkd[1440]: cali1315288ffd1: Gained IPv6LL May 12 12:55:15.450010 systemd-networkd[1440]: cali3f6aa4620a5: Gained IPv6LL May 12 12:55:15.720700 kubelet[2619]: E0512 12:55:15.720604 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:15.721200 containerd[1503]: time="2025-05-12T12:55:15.721035641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566cf88f5c-jz7rv,Uid:56d5ae46-32c0-49a5-b52c-500efd0dd389,Namespace:calico-system,Attempt:0,}" May 12 12:55:15.721627 containerd[1503]: time="2025-05-12T12:55:15.721322688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b5m6q,Uid:4f83f5f2-a16e-4509-bf34-7c2d09c96729,Namespace:kube-system,Attempt:0,}" May 12 12:55:15.837093 systemd-networkd[1440]: calia5200c1806c: Link UP May 12 12:55:15.837454 systemd-networkd[1440]: calia5200c1806c: Gained carrier May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.767 [INFO][4490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0 coredns-668d6bf9bc- kube-system 4f83f5f2-a16e-4509-bf34-7c2d09c96729 678 0 2025-05-12 12:54:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-b5m6q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5200c1806c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.767 [INFO][4490] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.797 [INFO][4511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" HandleID="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Workload="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.808 [INFO][4511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" HandleID="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Workload="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000306c50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-b5m6q", "timestamp":"2025-05-12 12:55:15.797417765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.808 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.808 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.808 [INFO][4511] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.811 [INFO][4511] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.814 [INFO][4511] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.817 [INFO][4511] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.819 [INFO][4511] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.821 [INFO][4511] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.821 [INFO][4511] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.823 [INFO][4511] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41 May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.826 [INFO][4511] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.832 [INFO][4511] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.832 [INFO][4511] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" host="localhost" May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.832 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 12:55:15.848485 containerd[1503]: 2025-05-12 12:55:15.832 [INFO][4511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" HandleID="k8s-pod-network.4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Workload="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 12:55:15.849108 containerd[1503]: 2025-05-12 12:55:15.834 [INFO][4490] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f83f5f2-a16e-4509-bf34-7c2d09c96729", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-b5m6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5200c1806c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:15.849108 containerd[1503]: 2025-05-12 12:55:15.834 [INFO][4490] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 12:55:15.849108 containerd[1503]: 2025-05-12 12:55:15.835 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5200c1806c ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 12:55:15.849108 containerd[1503]: 2025-05-12 12:55:15.837 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 
12:55:15.849108 containerd[1503]: 2025-05-12 12:55:15.838 [INFO][4490] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f83f5f2-a16e-4509-bf34-7c2d09c96729", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41", Pod:"coredns-668d6bf9bc-b5m6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5200c1806c", MAC:"1a:aa:73:21:1d:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:15.849108 containerd[1503]: 2025-05-12 12:55:15.846 [INFO][4490] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" Namespace="kube-system" Pod="coredns-668d6bf9bc-b5m6q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b5m6q-eth0" May 12 12:55:15.870657 containerd[1503]: time="2025-05-12T12:55:15.870620131Z" level=info msg="connecting to shim 4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41" address="unix:///run/containerd/s/6d49086ee3deae6294954a9aae845751bc55fa47a35af07fcc9854cab694c5d1" namespace=k8s.io protocol=ttrpc version=3 May 12 12:55:15.883790 kubelet[2619]: E0512 12:55:15.883757 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:15.899006 systemd[1]: Started cri-containerd-4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41.scope - libcontainer container 4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41. 
May 12 12:55:15.909112 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 12:55:15.932505 containerd[1503]: time="2025-05-12T12:55:15.931811288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b5m6q,Uid:4f83f5f2-a16e-4509-bf34-7c2d09c96729,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41\"" May 12 12:55:15.933389 kubelet[2619]: E0512 12:55:15.933339 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:15.937564 containerd[1503]: time="2025-05-12T12:55:15.937529626Z" level=info msg="CreateContainer within sandbox \"4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 12:55:15.946774 systemd-networkd[1440]: cali75151685e7a: Link UP May 12 12:55:15.946962 systemd-networkd[1440]: cali75151685e7a: Gained carrier May 12 12:55:15.953270 containerd[1503]: time="2025-05-12T12:55:15.953221885Z" level=info msg="Container b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed: CDI devices from CRI Config.CDIDevices: []" May 12 12:55:15.962038 systemd-networkd[1440]: cali22c30393496: Gained IPv6LL May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.765 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0 calico-kube-controllers-566cf88f5c- calico-system 56d5ae46-32c0-49a5-b52c-500efd0dd389 679 0 2025-05-12 12:54:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:566cf88f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-566cf88f5c-jz7rv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali75151685e7a [] []}} ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.765 [INFO][4480] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.798 [INFO][4509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" HandleID="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Workload="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.815 [INFO][4509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" HandleID="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Workload="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a3c30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-566cf88f5c-jz7rv", "timestamp":"2025-05-12 12:55:15.798292746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.815 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.832 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.832 [INFO][4509] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.912 [INFO][4509] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.916 [INFO][4509] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.922 [INFO][4509] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.923 [INFO][4509] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.925 [INFO][4509] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.925 [INFO][4509] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.928 [INFO][4509] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.932 [INFO][4509] 
ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.941 [INFO][4509] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.942 [INFO][4509] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" host="localhost" May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.942 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 12:55:15.964211 containerd[1503]: 2025-05-12 12:55:15.942 [INFO][4509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" HandleID="k8s-pod-network.dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Workload="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" May 12 12:55:15.964678 containerd[1503]: 2025-05-12 12:55:15.944 [INFO][4480] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0", GenerateName:"calico-kube-controllers-566cf88f5c-", Namespace:"calico-system", SelfLink:"", UID:"56d5ae46-32c0-49a5-b52c-500efd0dd389", ResourceVersion:"679", 
Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566cf88f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-566cf88f5c-jz7rv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75151685e7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 12:55:15.964678 containerd[1503]: 2025-05-12 12:55:15.944 [INFO][4480] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" May 12 12:55:15.964678 containerd[1503]: 2025-05-12 12:55:15.944 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75151685e7a ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" May 12 12:55:15.964678 containerd[1503]: 2025-05-12 12:55:15.947 [INFO][4480] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" May 12 12:55:15.964678 containerd[1503]: 2025-05-12 12:55:15.948 [INFO][4480] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0", GenerateName:"calico-kube-controllers-566cf88f5c-", Namespace:"calico-system", SelfLink:"", UID:"56d5ae46-32c0-49a5-b52c-500efd0dd389", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 12, 54, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566cf88f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf", Pod:"calico-kube-controllers-566cf88f5c-jz7rv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75151685e7a", MAC:"f2:02:5a:1c:5c:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 12 12:55:15.964678 containerd[1503]: 2025-05-12 12:55:15.960 [INFO][4480] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" Namespace="calico-system" Pod="calico-kube-controllers-566cf88f5c-jz7rv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566cf88f5c--jz7rv-eth0"
May 12 12:55:15.966572 containerd[1503]: time="2025-05-12T12:55:15.966318001Z" level=info msg="CreateContainer within sandbox \"4bcaa9fe5f83a5769734341c053fc29e1476350b5900fe037aac8fffa3860a41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed\""
May 12 12:55:15.966827 containerd[1503]: time="2025-05-12T12:55:15.966804053Z" level=info msg="StartContainer for \"b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed\""
May 12 12:55:15.967827 containerd[1503]: time="2025-05-12T12:55:15.967779956Z" level=info msg="connecting to shim b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed" address="unix:///run/containerd/s/6d49086ee3deae6294954a9aae845751bc55fa47a35af07fcc9854cab694c5d1" protocol=ttrpc version=3
May 12 12:55:15.985146 containerd[1503]: time="2025-05-12T12:55:15.984063749Z" level=info msg="connecting to shim dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf" address="unix:///run/containerd/s/ca5960f2db5ab10c9b6679fcd2cdbba784df648615cf57290e44e53bc20267a5" namespace=k8s.io protocol=ttrpc version=3
May 12 12:55:15.986168 systemd[1]: Started cri-containerd-b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed.scope - libcontainer container b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed.
May 12 12:55:16.007061 systemd[1]: Started cri-containerd-dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf.scope - libcontainer container dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf.
May 12 12:55:16.014042 containerd[1503]: time="2025-05-12T12:55:16.014010144Z" level=info msg="StartContainer for \"b5baa2c562d21bf4a48d24756024545106c40b3966a0e4c9e404c9f4604e00ed\" returns successfully"
May 12 12:55:16.020313 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 12:55:16.043534 containerd[1503]: time="2025-05-12T12:55:16.043501279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566cf88f5c-jz7rv,Uid:56d5ae46-32c0-49a5-b52c-500efd0dd389,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf\""
May 12 12:55:16.858026 systemd-networkd[1440]: calia5200c1806c: Gained IPv6LL
May 12 12:55:16.880079 kubelet[2619]: E0512 12:55:16.880047 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:55:16.882519 kubelet[2619]: E0512 12:55:16.882198 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:55:16.901078 kubelet[2619]: I0512 12:55:16.901016 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b5m6q" podStartSLOduration=39.900993007 podStartE2EDuration="39.900993007s" podCreationTimestamp="2025-05-12 12:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 12:55:16.900552997 +0000 UTC m=+44.273877536" watchObservedRunningTime="2025-05-12 12:55:16.900993007 +0000 UTC m=+44.274317546"
May 12 12:55:17.305970 systemd-networkd[1440]: cali75151685e7a: Gained IPv6LL
May 12 12:55:17.884514 kubelet[2619]: E0512 12:55:17.884428 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:55:18.176561 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:51766.service - OpenSSH per-connection server daemon (10.0.0.1:51766).
May 12 12:55:18.230082 sshd[4688]: Accepted publickey for core from 10.0.0.1 port 51766 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:18.231524 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:18.235280 systemd-logind[1493]: New session 11 of user core.
May 12 12:55:18.245070 systemd[1]: Started session-11.scope - Session 11 of User core.
May 12 12:55:18.355569 sshd[4690]: Connection closed by 10.0.0.1 port 51766
May 12 12:55:18.354916 sshd-session[4688]: pam_unix(sshd:session): session closed for user core
May 12 12:55:18.358333 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:51766.service: Deactivated successfully.
May 12 12:55:18.360583 systemd[1]: session-11.scope: Deactivated successfully.
May 12 12:55:18.362369 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit.
May 12 12:55:18.363796 systemd-logind[1493]: Removed session 11.
May 12 12:55:18.886335 kubelet[2619]: E0512 12:55:18.886301 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:55:23.377078 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:37162.service - OpenSSH per-connection server daemon (10.0.0.1:37162).
May 12 12:55:23.436085 sshd[4719]: Accepted publickey for core from 10.0.0.1 port 37162 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:23.442323 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:23.450517 systemd-logind[1493]: New session 12 of user core.
May 12 12:55:23.460021 systemd[1]: Started session-12.scope - Session 12 of User core.
May 12 12:55:23.628905 sshd[4721]: Connection closed by 10.0.0.1 port 37162
May 12 12:55:23.629491 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
May 12 12:55:23.638236 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:37162.service: Deactivated successfully.
May 12 12:55:23.641208 systemd[1]: session-12.scope: Deactivated successfully.
May 12 12:55:23.642038 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit.
May 12 12:55:23.645105 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:37166.service - OpenSSH per-connection server daemon (10.0.0.1:37166).
May 12 12:55:23.646353 systemd-logind[1493]: Removed session 12.
May 12 12:55:23.685067 containerd[1503]: time="2025-05-12T12:55:23.685019063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:23.685596 containerd[1503]: time="2025-05-12T12:55:23.685569994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603"
May 12 12:55:23.692317 containerd[1503]: time="2025-05-12T12:55:23.692282771Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:23.692877 sshd[4735]: Accepted publickey for core from 10.0.0.1 port 37166 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:23.694630 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:23.695527 containerd[1503]: time="2025-05-12T12:55:23.695456476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:23.696455 containerd[1503]: time="2025-05-12T12:55:23.696064289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 11.517036792s"
May 12 12:55:23.696455 containerd[1503]: time="2025-05-12T12:55:23.696094729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 12 12:55:23.697498 containerd[1503]: time="2025-05-12T12:55:23.697446677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 12 12:55:23.700901 systemd-logind[1493]: New session 13 of user core.
May 12 12:55:23.709036 systemd[1]: Started session-13.scope - Session 13 of User core.
May 12 12:55:23.711851 containerd[1503]: time="2025-05-12T12:55:23.711808491Z" level=info msg="CreateContainer within sandbox \"f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 12:55:23.739218 containerd[1503]: time="2025-05-12T12:55:23.739164331Z" level=info msg="Container fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e: CDI devices from CRI Config.CDIDevices: []"
May 12 12:55:23.750547 containerd[1503]: time="2025-05-12T12:55:23.750436401Z" level=info msg="CreateContainer within sandbox \"f6ee21d65bd14135f1fdc3ef2da6ad9ef56245bb51befe7ca16d7559f0c7ab90\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e\""
May 12 12:55:23.751181 containerd[1503]: time="2025-05-12T12:55:23.751145936Z" level=info msg="StartContainer for \"fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e\""
May 12 12:55:23.752404 containerd[1503]: time="2025-05-12T12:55:23.752374361Z" level=info msg="connecting to shim fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e" address="unix:///run/containerd/s/57c988ba88fcf8aa326bde034e8c0b19efb727c6b465411dea1c5fe098da1775" protocol=ttrpc version=3
May 12 12:55:23.778248 systemd[1]: Started cri-containerd-fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e.scope - libcontainer container fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e.
May 12 12:55:23.814711 containerd[1503]: time="2025-05-12T12:55:23.814605715Z" level=info msg="StartContainer for \"fbd025e5f516c0fb40b256acbffe8619ca525c975eb3104705dcb07b07b0bd7e\" returns successfully"
May 12 12:55:23.898784 sshd[4741]: Connection closed by 10.0.0.1 port 37166
May 12 12:55:23.901331 sshd-session[4735]: pam_unix(sshd:session): session closed for user core
May 12 12:55:23.919246 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:37166.service: Deactivated successfully.
May 12 12:55:23.919852 kubelet[2619]: I0512 12:55:23.919655 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-b48bt" podStartSLOduration=28.400771311 podStartE2EDuration="39.919638424s" podCreationTimestamp="2025-05-12 12:54:44 +0000 UTC" firstStartedPulling="2025-05-12 12:55:12.178443121 +0000 UTC m=+39.551767660" lastFinishedPulling="2025-05-12 12:55:23.697310234 +0000 UTC m=+51.070634773" observedRunningTime="2025-05-12 12:55:23.918980011 +0000 UTC m=+51.292304550" watchObservedRunningTime="2025-05-12 12:55:23.919638424 +0000 UTC m=+51.292962963"
May 12 12:55:23.920827 systemd[1]: session-13.scope: Deactivated successfully.
May 12 12:55:23.924408 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit.
May 12 12:55:23.928332 systemd-logind[1493]: Removed session 13.
May 12 12:55:23.932203 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:37174.service - OpenSSH per-connection server daemon (10.0.0.1:37174).
May 12 12:55:23.959872 containerd[1503]: time="2025-05-12T12:55:23.959386117Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:23.960995 containerd[1503]: time="2025-05-12T12:55:23.960967790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 12 12:55:23.962480 containerd[1503]: time="2025-05-12T12:55:23.962455940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 264.975983ms"
May 12 12:55:23.962598 containerd[1503]: time="2025-05-12T12:55:23.962583063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 12 12:55:23.964336 containerd[1503]: time="2025-05-12T12:55:23.964162855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 12 12:55:23.966373 containerd[1503]: time="2025-05-12T12:55:23.966317099Z" level=info msg="CreateContainer within sandbox \"69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 12:55:23.973253 containerd[1503]: time="2025-05-12T12:55:23.972974276Z" level=info msg="Container 2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a: CDI devices from CRI Config.CDIDevices: []"
May 12 12:55:23.981477 containerd[1503]: time="2025-05-12T12:55:23.981445129Z" level=info msg="CreateContainer within sandbox \"69355d1293537e5af5229b660927073f101c61007963043bd52fdc4b62a5bc2e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a\""
May 12 12:55:23.982540 containerd[1503]: time="2025-05-12T12:55:23.982121703Z" level=info msg="StartContainer for \"2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a\""
May 12 12:55:23.982585 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 37174 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:23.983556 containerd[1503]: time="2025-05-12T12:55:23.983507331Z" level=info msg="connecting to shim 2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a" address="unix:///run/containerd/s/125129d04f4e4eb3037f45390a4d0a5d8219bc5764d74ab3f4ff80dccda1723c" protocol=ttrpc version=3
May 12 12:55:23.984118 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:23.989690 systemd-logind[1493]: New session 14 of user core.
May 12 12:55:23.993005 systemd[1]: Started session-14.scope - Session 14 of User core.
May 12 12:55:24.004021 systemd[1]: Started cri-containerd-2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a.scope - libcontainer container 2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a.
May 12 12:55:24.045338 containerd[1503]: time="2025-05-12T12:55:24.045276940Z" level=info msg="StartContainer for \"2d75c647ee397d7d1637c9d6dcd8c4ba301180ff7da178813ed3ac3765a0d67a\" returns successfully"
May 12 12:55:24.166954 sshd[4802]: Connection closed by 10.0.0.1 port 37174
May 12 12:55:24.166694 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
May 12 12:55:24.169936 systemd[1]: session-14.scope: Deactivated successfully.
May 12 12:55:24.173074 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit.
May 12 12:55:24.173204 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:37174.service: Deactivated successfully.
May 12 12:55:24.175893 systemd-logind[1493]: Removed session 14.
May 12 12:55:24.905874 kubelet[2619]: I0512 12:55:24.905814 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 12:55:24.931983 kubelet[2619]: I0512 12:55:24.931438 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f97f8bfd5-wzcg4" podStartSLOduration=30.932006156 podStartE2EDuration="40.931418014s" podCreationTimestamp="2025-05-12 12:54:44 +0000 UTC" firstStartedPulling="2025-05-12 12:55:13.964095264 +0000 UTC m=+41.337419803" lastFinishedPulling="2025-05-12 12:55:23.963507122 +0000 UTC m=+51.336831661" observedRunningTime="2025-05-12 12:55:24.916896882 +0000 UTC m=+52.290221421" watchObservedRunningTime="2025-05-12 12:55:24.931418014 +0000 UTC m=+52.304742513"
May 12 12:55:26.192149 containerd[1503]: time="2025-05-12T12:55:26.192104351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:26.193031 containerd[1503]: time="2025-05-12T12:55:26.192574960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 12 12:55:26.193396 containerd[1503]: time="2025-05-12T12:55:26.193350055Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:26.195289 containerd[1503]: time="2025-05-12T12:55:26.195265973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:26.195983 containerd[1503]: time="2025-05-12T12:55:26.195899425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 2.231704409s"
May 12 12:55:26.195983 containerd[1503]: time="2025-05-12T12:55:26.195929866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 12 12:55:26.197052 containerd[1503]: time="2025-05-12T12:55:26.197027287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 12 12:55:26.198140 containerd[1503]: time="2025-05-12T12:55:26.198085548Z" level=info msg="CreateContainer within sandbox \"ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 12 12:55:26.206721 containerd[1503]: time="2025-05-12T12:55:26.206419710Z" level=info msg="Container 3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377: CDI devices from CRI Config.CDIDevices: []"
May 12 12:55:26.214582 containerd[1503]: time="2025-05-12T12:55:26.214463387Z" level=info msg="CreateContainer within sandbox \"ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377\""
May 12 12:55:26.215264 containerd[1503]: time="2025-05-12T12:55:26.214927716Z" level=info msg="StartContainer for \"3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377\""
May 12 12:55:26.216281 containerd[1503]: time="2025-05-12T12:55:26.216254702Z" level=info msg="connecting to shim 3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377" address="unix:///run/containerd/s/b70698305b2e563d2d5be35971e79323fe5719fb42fa87ab5c69b00a2ea202b5" protocol=ttrpc version=3
May 12 12:55:26.245098 systemd[1]: Started cri-containerd-3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377.scope - libcontainer container 3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377.
May 12 12:55:26.298370 containerd[1503]: time="2025-05-12T12:55:26.298330583Z" level=info msg="StartContainer for \"3f084ad36667a050f11b7317b600f37be1e7b0cdbcef212e0314bd01baf06377\" returns successfully"
May 12 12:55:29.182510 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182).
May 12 12:55:29.239657 sshd[4881]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:29.241190 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:29.246411 systemd-logind[1493]: New session 15 of user core.
May 12 12:55:29.258035 systemd[1]: Started session-15.scope - Session 15 of User core.
May 12 12:55:29.441447 sshd[4883]: Connection closed by 10.0.0.1 port 37182
May 12 12:55:29.441537 sshd-session[4881]: pam_unix(sshd:session): session closed for user core
May 12 12:55:29.445267 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:37182.service: Deactivated successfully.
May 12 12:55:29.448747 systemd[1]: session-15.scope: Deactivated successfully.
May 12 12:55:29.449540 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit.
May 12 12:55:29.450656 systemd-logind[1493]: Removed session 15.
May 12 12:55:30.343855 containerd[1503]: time="2025-05-12T12:55:30.343796791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:30.344474 containerd[1503]: time="2025-05-12T12:55:30.344431523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 12 12:55:30.345121 containerd[1503]: time="2025-05-12T12:55:30.345096095Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:30.347324 containerd[1503]: time="2025-05-12T12:55:30.347286296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:30.347675 containerd[1503]: time="2025-05-12T12:55:30.347642542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 4.150436452s"
May 12 12:55:30.347675 containerd[1503]: time="2025-05-12T12:55:30.347671943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 12 12:55:30.356207 containerd[1503]: time="2025-05-12T12:55:30.355699171Z" level=info msg="CreateContainer within sandbox \"dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 12 12:55:30.362931 containerd[1503]: time="2025-05-12T12:55:30.362545378Z" level=info msg="Container f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b: CDI devices from CRI Config.CDIDevices: []"
May 12 12:55:30.363118 containerd[1503]: time="2025-05-12T12:55:30.363097628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 12 12:55:30.369437 containerd[1503]: time="2025-05-12T12:55:30.369405065Z" level=info msg="CreateContainer within sandbox \"dbe14c0c6bdd1b523c73eb9bfb7ea8e570fde81683c7b978f7bbd6de590c51cf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b\""
May 12 12:55:30.369964 containerd[1503]: time="2025-05-12T12:55:30.369942354Z" level=info msg="StartContainer for \"f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b\""
May 12 12:55:30.371429 containerd[1503]: time="2025-05-12T12:55:30.371390781Z" level=info msg="connecting to shim f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b" address="unix:///run/containerd/s/ca5960f2db5ab10c9b6679fcd2cdbba784df648615cf57290e44e53bc20267a5" protocol=ttrpc version=3
May 12 12:55:30.392017 systemd[1]: Started cri-containerd-f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b.scope - libcontainer container f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b.
May 12 12:55:30.429763 containerd[1503]: time="2025-05-12T12:55:30.428069069Z" level=info msg="StartContainer for \"f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b\" returns successfully"
May 12 12:55:30.937859 kubelet[2619]: I0512 12:55:30.937735 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-566cf88f5c-jz7rv" podStartSLOduration=31.633828322 podStartE2EDuration="45.937721733s" podCreationTimestamp="2025-05-12 12:54:45 +0000 UTC" firstStartedPulling="2025-05-12 12:55:16.044912993 +0000 UTC m=+43.418237492" lastFinishedPulling="2025-05-12 12:55:30.348806364 +0000 UTC m=+57.722130903" observedRunningTime="2025-05-12 12:55:30.937489729 +0000 UTC m=+58.310814268" watchObservedRunningTime="2025-05-12 12:55:30.937721733 +0000 UTC m=+58.311046272"
May 12 12:55:30.976468 containerd[1503]: time="2025-05-12T12:55:30.976433049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b\" id:\"7e354d5b9543c0a9f8d9702a893782a3f08938b664b5d61c666db03f088a519e\" pid:4945 exited_at:{seconds:1747054530 nanos:975943560}"
May 12 12:55:32.376423 containerd[1503]: time="2025-05-12T12:55:32.376377924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:32.376799 containerd[1503]: time="2025-05-12T12:55:32.376765611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 12 12:55:32.377592 containerd[1503]: time="2025-05-12T12:55:32.377567385Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:32.383559 containerd[1503]: time="2025-05-12T12:55:32.383521013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 12:55:32.384118 containerd[1503]: time="2025-05-12T12:55:32.384077623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.020411665s"
May 12 12:55:32.384148 containerd[1503]: time="2025-05-12T12:55:32.384116784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 12 12:55:32.387939 containerd[1503]: time="2025-05-12T12:55:32.387895972Z" level=info msg="CreateContainer within sandbox \"ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 12 12:55:32.394823 containerd[1503]: time="2025-05-12T12:55:32.394704935Z" level=info msg="Container 9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb: CDI devices from CRI Config.CDIDevices: []"
May 12 12:55:32.403437 containerd[1503]: time="2025-05-12T12:55:32.403349091Z" level=info msg="CreateContainer within sandbox \"ec37de464254278e4d3559136917a81cf57fe53ae5864be61e21cff822af24d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb\""
May 12 12:55:32.404124 containerd[1503]: time="2025-05-12T12:55:32.403950982Z" level=info msg="StartContainer for \"9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb\""
May 12 12:55:32.407343 containerd[1503]: time="2025-05-12T12:55:32.407273842Z" level=info msg="connecting to shim 9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb" address="unix:///run/containerd/s/b70698305b2e563d2d5be35971e79323fe5719fb42fa87ab5c69b00a2ea202b5" protocol=ttrpc version=3
May 12 12:55:32.429999 systemd[1]: Started cri-containerd-9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb.scope - libcontainer container 9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb.
May 12 12:55:32.467824 containerd[1503]: time="2025-05-12T12:55:32.467791016Z" level=info msg="StartContainer for \"9baa672ee9c7fa8097ea38105521f76cffab8e80439122a63a35dc6712a106eb\" returns successfully"
May 12 12:55:32.802528 kubelet[2619]: I0512 12:55:32.802447 2619 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 12 12:55:32.804000 kubelet[2619]: I0512 12:55:32.803980 2619 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 12 12:55:34.453952 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:35388.service - OpenSSH per-connection server daemon (10.0.0.1:35388).
May 12 12:55:34.507733 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 35388 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:34.509018 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:34.513533 systemd-logind[1493]: New session 16 of user core.
May 12 12:55:34.528967 systemd[1]: Started session-16.scope - Session 16 of User core.
May 12 12:55:34.713242 sshd[5005]: Connection closed by 10.0.0.1 port 35388 May 12 12:55:34.713896 sshd-session[5003]: pam_unix(sshd:session): session closed for user core May 12 12:55:34.717214 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:35388.service: Deactivated successfully. May 12 12:55:34.719356 systemd[1]: session-16.scope: Deactivated successfully. May 12 12:55:34.720251 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. May 12 12:55:34.722619 systemd-logind[1493]: Removed session 16. May 12 12:55:38.880207 containerd[1503]: time="2025-05-12T12:55:38.880158700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\" id:\"382422f7bd3ddc7ae4fea208b44f78da40d3e708b768b149aa897e3da1626255\" pid:5035 exited_at:{seconds:1747054538 nanos:879803735}" May 12 12:55:38.882046 kubelet[2619]: E0512 12:55:38.881971 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:38.895584 kubelet[2619]: I0512 12:55:38.895515 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bhnrs" podStartSLOduration=35.672768419 podStartE2EDuration="53.895498268s" podCreationTimestamp="2025-05-12 12:54:45 +0000 UTC" firstStartedPulling="2025-05-12 12:55:14.16220695 +0000 UTC m=+41.535531489" lastFinishedPulling="2025-05-12 12:55:32.384936799 +0000 UTC m=+59.758261338" observedRunningTime="2025-05-12 12:55:32.943896701 +0000 UTC m=+60.317221240" watchObservedRunningTime="2025-05-12 12:55:38.895498268 +0000 UTC m=+66.268822807" May 12 12:55:39.731884 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:35402.service - OpenSSH per-connection server daemon (10.0.0.1:35402). 
May 12 12:55:39.785584 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 35402 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:55:39.786892 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:55:39.790520 systemd-logind[1493]: New session 17 of user core. May 12 12:55:39.804984 systemd[1]: Started session-17.scope - Session 17 of User core. May 12 12:55:39.934835 sshd[5051]: Connection closed by 10.0.0.1 port 35402 May 12 12:55:39.935329 sshd-session[5049]: pam_unix(sshd:session): session closed for user core May 12 12:55:39.938116 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:35402.service: Deactivated successfully. May 12 12:55:39.939755 systemd[1]: session-17.scope: Deactivated successfully. May 12 12:55:39.941172 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. May 12 12:55:39.942827 systemd-logind[1493]: Removed session 17. May 12 12:55:39.954537 kubelet[2619]: I0512 12:55:39.954473 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 12:55:44.952121 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:38304.service - OpenSSH per-connection server daemon (10.0.0.1:38304). May 12 12:55:45.002034 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 38304 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:55:45.003349 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:55:45.007190 systemd-logind[1493]: New session 18 of user core. May 12 12:55:45.018001 systemd[1]: Started session-18.scope - Session 18 of User core. May 12 12:55:45.150125 sshd[5069]: Connection closed by 10.0.0.1 port 38304 May 12 12:55:45.150559 sshd-session[5067]: pam_unix(sshd:session): session closed for user core May 12 12:55:45.153290 systemd[1]: session-18.scope: Deactivated successfully. 
May 12 12:55:45.155054 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:38304.service: Deactivated successfully. May 12 12:55:45.157626 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. May 12 12:55:45.158757 systemd-logind[1493]: Removed session 18. May 12 12:55:45.721338 kubelet[2619]: E0512 12:55:45.721308 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 12:55:50.166730 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:38312.service - OpenSSH per-connection server daemon (10.0.0.1:38312). May 12 12:55:50.224395 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 38312 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ May 12 12:55:50.225566 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 12:55:50.229382 systemd-logind[1493]: New session 19 of user core. May 12 12:55:50.236990 systemd[1]: Started session-19.scope - Session 19 of User core. May 12 12:55:50.397427 sshd[5085]: Connection closed by 10.0.0.1 port 38312 May 12 12:55:50.397791 sshd-session[5083]: pam_unix(sshd:session): session closed for user core May 12 12:55:50.408423 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:38312.service: Deactivated successfully. May 12 12:55:50.410357 systemd[1]: session-19.scope: Deactivated successfully. May 12 12:55:50.412385 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. May 12 12:55:50.415469 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:38318.service - OpenSSH per-connection server daemon (10.0.0.1:38318). May 12 12:55:50.416256 systemd-logind[1493]: Removed session 19. 
May 12 12:55:50.471495 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 38318 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:50.472741 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:50.477787 systemd-logind[1493]: New session 20 of user core.
May 12 12:55:50.481965 systemd[1]: Started session-20.scope - Session 20 of User core.
May 12 12:55:50.696362 sshd[5100]: Connection closed by 10.0.0.1 port 38318
May 12 12:55:50.696944 sshd-session[5098]: pam_unix(sshd:session): session closed for user core
May 12 12:55:50.708499 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:38318.service: Deactivated successfully.
May 12 12:55:50.710329 systemd[1]: session-20.scope: Deactivated successfully.
May 12 12:55:50.711709 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit.
May 12 12:55:50.713942 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:38322.service - OpenSSH per-connection server daemon (10.0.0.1:38322).
May 12 12:55:50.714500 systemd-logind[1493]: Removed session 20.
May 12 12:55:50.780891 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:50.782177 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:50.786545 systemd-logind[1493]: New session 21 of user core.
May 12 12:55:50.796013 systemd[1]: Started session-21.scope - Session 21 of User core.
May 12 12:55:51.485090 sshd[5114]: Connection closed by 10.0.0.1 port 38322
May 12 12:55:51.485674 sshd-session[5112]: pam_unix(sshd:session): session closed for user core
May 12 12:55:51.494868 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:38322.service: Deactivated successfully.
May 12 12:55:51.496362 systemd[1]: session-21.scope: Deactivated successfully.
May 12 12:55:51.497710 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit.
May 12 12:55:51.504484 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:38326.service - OpenSSH per-connection server daemon (10.0.0.1:38326).
May 12 12:55:51.506497 systemd-logind[1493]: Removed session 21.
May 12 12:55:51.554153 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 38326 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:51.555309 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:51.559868 systemd-logind[1493]: New session 22 of user core.
May 12 12:55:51.570047 systemd[1]: Started session-22.scope - Session 22 of User core.
May 12 12:55:51.834470 sshd[5135]: Connection closed by 10.0.0.1 port 38326
May 12 12:55:51.835437 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
May 12 12:55:51.846728 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:38326.service: Deactivated successfully.
May 12 12:55:51.849666 systemd[1]: session-22.scope: Deactivated successfully.
May 12 12:55:51.853069 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit.
May 12 12:55:51.855070 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:38332.service - OpenSSH per-connection server daemon (10.0.0.1:38332).
May 12 12:55:51.858463 systemd-logind[1493]: Removed session 22.
May 12 12:55:51.907994 sshd[5146]: Accepted publickey for core from 10.0.0.1 port 38332 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:51.909745 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:51.915941 systemd-logind[1493]: New session 23 of user core.
May 12 12:55:51.925012 systemd[1]: Started session-23.scope - Session 23 of User core.
May 12 12:55:52.042408 sshd[5148]: Connection closed by 10.0.0.1 port 38332
May 12 12:55:52.042914 sshd-session[5146]: pam_unix(sshd:session): session closed for user core
May 12 12:55:52.046467 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:38332.service: Deactivated successfully.
May 12 12:55:52.048268 systemd[1]: session-23.scope: Deactivated successfully.
May 12 12:55:52.050619 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit.
May 12 12:55:52.052109 systemd-logind[1493]: Removed session 23.
May 12 12:55:53.720475 kubelet[2619]: E0512 12:55:53.720434 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:55:57.055715 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:41048.service - OpenSSH per-connection server daemon (10.0.0.1:41048).
May 12 12:55:57.113767 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 41048 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:55:57.115003 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:55:57.119027 systemd-logind[1493]: New session 24 of user core.
May 12 12:55:57.131069 systemd[1]: Started session-24.scope - Session 24 of User core.
May 12 12:55:57.251991 sshd[5173]: Connection closed by 10.0.0.1 port 41048
May 12 12:55:57.252323 sshd-session[5171]: pam_unix(sshd:session): session closed for user core
May 12 12:55:57.255754 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:41048.service: Deactivated successfully.
May 12 12:55:57.257875 systemd[1]: session-24.scope: Deactivated successfully.
May 12 12:55:57.259444 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit.
May 12 12:55:57.260831 systemd-logind[1493]: Removed session 24.
May 12 12:55:57.721144 kubelet[2619]: E0512 12:55:57.721093 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:56:00.951034 containerd[1503]: time="2025-05-12T12:56:00.950971188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f31e04cdcf79c5a557f00ff93512d8e5dd46c81a46cd253ce02477886c28a00b\" id:\"d7cf010dbab524bcacf76dc5995545743ca2e177cb90a337526deb1b7c6a164c\" pid:5199 exited_at:{seconds:1747054560 nanos:950746468}"
May 12 12:56:02.267577 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:41056.service - OpenSSH per-connection server daemon (10.0.0.1:41056).
May 12 12:56:02.306654 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 41056 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:56:02.307814 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:56:02.312107 systemd-logind[1493]: New session 25 of user core.
May 12 12:56:02.321976 systemd[1]: Started session-25.scope - Session 25 of User core.
May 12 12:56:02.434593 sshd[5213]: Connection closed by 10.0.0.1 port 41056
May 12 12:56:02.434934 sshd-session[5211]: pam_unix(sshd:session): session closed for user core
May 12 12:56:02.438234 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:41056.service: Deactivated successfully.
May 12 12:56:02.440037 systemd[1]: session-25.scope: Deactivated successfully.
May 12 12:56:02.440928 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit.
May 12 12:56:02.442356 systemd-logind[1493]: Removed session 25.
May 12 12:56:05.720365 kubelet[2619]: E0512 12:56:05.720325 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 12:56:07.453402 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:51906.service - OpenSSH per-connection server daemon (10.0.0.1:51906).
May 12 12:56:07.508196 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 51906 ssh2: RSA SHA256:P0w5FDSdN9l4N13JShvA3TlfGNcQFCAqreD3HvxlUDQ
May 12 12:56:07.509574 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 12:56:07.513942 systemd-logind[1493]: New session 26 of user core.
May 12 12:56:07.524155 systemd[1]: Started session-26.scope - Session 26 of User core.
May 12 12:56:07.675767 sshd[5228]: Connection closed by 10.0.0.1 port 51906
May 12 12:56:07.676487 sshd-session[5226]: pam_unix(sshd:session): session closed for user core
May 12 12:56:07.680047 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:51906.service: Deactivated successfully.
May 12 12:56:07.681754 systemd[1]: session-26.scope: Deactivated successfully.
May 12 12:56:07.682564 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit.
May 12 12:56:07.683537 systemd-logind[1493]: Removed session 26.
May 12 12:56:08.906022 containerd[1503]: time="2025-05-12T12:56:08.905969873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22affa3553f3fd036c3efed7d4a2bf72c4a035fd111cd078457ecbaa22b01419\" id:\"5fbf5df0bd250f1c755f0af886311df2bace8fd93bb61299fa19c94ea76a9121\" pid:5256 exited_at:{seconds:1747054568 nanos:905537713}"