May 16 09:41:59.818617 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 09:41:59.818637 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 16 08:35:42 -00 2025
May 16 09:41:59.818647 kernel: KASLR enabled
May 16 09:41:59.818653 kernel: efi: EFI v2.7 by EDK II
May 16 09:41:59.818658 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 16 09:41:59.818663 kernel: random: crng init done
May 16 09:41:59.818670 kernel: secureboot: Secure boot disabled
May 16 09:41:59.818676 kernel: ACPI: Early table checksum verification disabled
May 16 09:41:59.818681 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 16 09:41:59.818688 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 09:41:59.818694 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818700 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818705 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818711 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818718 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818725 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818732 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818738 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818768 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 09:41:59.818775 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 09:41:59.818781 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 16 09:41:59.818787 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 09:41:59.818793 kernel: NODE_DATA(0) allocated [mem 0xdc964dc0-0xdc96bfff]
May 16 09:41:59.818799 kernel: Zone ranges:
May 16 09:41:59.818805 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 09:41:59.818814 kernel: DMA32 empty
May 16 09:41:59.818820 kernel: Normal empty
May 16 09:41:59.818826 kernel: Device empty
May 16 09:41:59.818832 kernel: Movable zone start for each node
May 16 09:41:59.818838 kernel: Early memory node ranges
May 16 09:41:59.818844 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 16 09:41:59.818850 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 16 09:41:59.818856 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 16 09:41:59.818862 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 16 09:41:59.818868 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 16 09:41:59.818874 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 16 09:41:59.818880 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 16 09:41:59.818887 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 16 09:41:59.818893 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 16 09:41:59.818900 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 16 09:41:59.818908 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 16 09:41:59.818915 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 16 09:41:59.818921 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 16 09:41:59.818929 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 09:41:59.818935 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 09:41:59.818942 kernel: psci: probing for conduit method from ACPI.
May 16 09:41:59.818948 kernel: psci: PSCIv1.1 detected in firmware.
May 16 09:41:59.818955 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 09:41:59.818961 kernel: psci: Trusted OS migration not required
May 16 09:41:59.818967 kernel: psci: SMC Calling Convention v1.1
May 16 09:41:59.818974 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 09:41:59.818980 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 16 09:41:59.818987 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 16 09:41:59.818995 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 09:41:59.819001 kernel: Detected PIPT I-cache on CPU0
May 16 09:41:59.819008 kernel: CPU features: detected: GIC system register CPU interface
May 16 09:41:59.819014 kernel: CPU features: detected: Spectre-v4
May 16 09:41:59.819020 kernel: CPU features: detected: Spectre-BHB
May 16 09:41:59.819027 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 09:41:59.819033 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 09:41:59.819040 kernel: CPU features: detected: ARM erratum 1418040
May 16 09:41:59.819046 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 09:41:59.819053 kernel: alternatives: applying boot alternatives
May 16 09:41:59.819060 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efb8cca3b981587a1314d5462995d10283ca386e95a1cc1f8f2d642520bcc17
May 16 09:41:59.819068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 09:41:59.819075 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 09:41:59.819081 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 09:41:59.819088 kernel: Fallback order for Node 0: 0
May 16 09:41:59.819094 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 16 09:41:59.819100 kernel: Policy zone: DMA
May 16 09:41:59.819113 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 09:41:59.819120 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 16 09:41:59.819126 kernel: software IO TLB: area num 4.
May 16 09:41:59.819132 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 16 09:41:59.819139 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 16 09:41:59.819145 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 09:41:59.819153 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 09:41:59.819160 kernel: rcu: RCU event tracing is enabled.
May 16 09:41:59.819167 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 09:41:59.819174 kernel: Trampoline variant of Tasks RCU enabled.
May 16 09:41:59.819180 kernel: Tracing variant of Tasks RCU enabled.
May 16 09:41:59.819187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 09:41:59.819193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 09:41:59.819200 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 09:41:59.819206 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 09:41:59.819213 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 09:41:59.819219 kernel: GICv3: 256 SPIs implemented
May 16 09:41:59.819227 kernel: GICv3: 0 Extended SPIs implemented
May 16 09:41:59.819233 kernel: Root IRQ handler: gic_handle_irq
May 16 09:41:59.819240 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 16 09:41:59.819246 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 16 09:41:59.819252 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 09:41:59.819259 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 09:41:59.819265 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 16 09:41:59.819272 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 16 09:41:59.819278 kernel: GICv3: using LPI property table @0x0000000040100000
May 16 09:41:59.819285 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 16 09:41:59.819291 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 09:41:59.819301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:41:59.819309 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 09:41:59.819315 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 09:41:59.819322 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 09:41:59.819328 kernel: arm-pv: using stolen time PV
May 16 09:41:59.819338 kernel: Console: colour dummy device 80x25
May 16 09:41:59.819344 kernel: ACPI: Core revision 20240827
May 16 09:41:59.819351 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 09:41:59.819358 kernel: pid_max: default: 32768 minimum: 301
May 16 09:41:59.819365 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 09:41:59.819373 kernel: landlock: Up and running.
May 16 09:41:59.819380 kernel: SELinux: Initializing.
May 16 09:41:59.819386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 09:41:59.819393 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 09:41:59.819400 kernel: rcu: Hierarchical SRCU implementation.
May 16 09:41:59.819406 kernel: rcu: Max phase no-delay instances is 400.
May 16 09:41:59.819413 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 09:41:59.819420 kernel: Remapping and enabling EFI services.
May 16 09:41:59.819426 kernel: smp: Bringing up secondary CPUs ...
May 16 09:41:59.819433 kernel: Detected PIPT I-cache on CPU1
May 16 09:41:59.819446 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 09:41:59.819453 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 16 09:41:59.819461 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:41:59.819468 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 09:41:59.819475 kernel: Detected PIPT I-cache on CPU2
May 16 09:41:59.819482 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 09:41:59.819489 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 16 09:41:59.819497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:41:59.819504 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 09:41:59.819511 kernel: Detected PIPT I-cache on CPU3
May 16 09:41:59.819518 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 09:41:59.819525 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 16 09:41:59.819532 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 09:41:59.819538 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 09:41:59.819545 kernel: smp: Brought up 1 node, 4 CPUs
May 16 09:41:59.819552 kernel: SMP: Total of 4 processors activated.
May 16 09:41:59.819559 kernel: CPU: All CPU(s) started at EL1
May 16 09:41:59.819567 kernel: CPU features: detected: 32-bit EL0 Support
May 16 09:41:59.819574 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 09:41:59.819581 kernel: CPU features: detected: Common not Private translations
May 16 09:41:59.819588 kernel: CPU features: detected: CRC32 instructions
May 16 09:41:59.819595 kernel: CPU features: detected: Enhanced Virtualization Traps
May 16 09:41:59.819602 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 09:41:59.819609 kernel: CPU features: detected: LSE atomic instructions
May 16 09:41:59.819616 kernel: CPU features: detected: Privileged Access Never
May 16 09:41:59.819623 kernel: CPU features: detected: RAS Extension Support
May 16 09:41:59.819631 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 09:41:59.819638 kernel: alternatives: applying system-wide alternatives
May 16 09:41:59.819645 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 16 09:41:59.819652 kernel: Memory: 2440980K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125540K reserved, 0K cma-reserved)
May 16 09:41:59.819659 kernel: devtmpfs: initialized
May 16 09:41:59.819666 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 09:41:59.819673 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 09:41:59.819680 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 16 09:41:59.819687 kernel: 0 pages in range for non-PLT usage
May 16 09:41:59.819695 kernel: 508544 pages in range for PLT usage
May 16 09:41:59.819702 kernel: pinctrl core: initialized pinctrl subsystem
May 16 09:41:59.819709 kernel: SMBIOS 3.0.0 present.
May 16 09:41:59.819716 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 16 09:41:59.819723 kernel: DMI: Memory slots populated: 1/1
May 16 09:41:59.819730 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 09:41:59.819737 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 09:41:59.819763 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 09:41:59.819772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 09:41:59.819781 kernel: audit: initializing netlink subsys (disabled)
May 16 09:41:59.819788 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
May 16 09:41:59.819795 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 09:41:59.819802 kernel: cpuidle: using governor menu
May 16 09:41:59.819809 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 09:41:59.819816 kernel: ASID allocator initialised with 32768 entries
May 16 09:41:59.819823 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 09:41:59.819830 kernel: Serial: AMBA PL011 UART driver
May 16 09:41:59.819837 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 09:41:59.819845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 16 09:41:59.819852 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 16 09:41:59.819859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 16 09:41:59.819866 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 09:41:59.819873 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 16 09:41:59.819880 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 16 09:41:59.819887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 16 09:41:59.819894 kernel: ACPI: Added _OSI(Module Device)
May 16 09:41:59.819901 kernel: ACPI: Added _OSI(Processor Device)
May 16 09:41:59.819909 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 09:41:59.819916 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 09:41:59.819922 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 09:41:59.819929 kernel: ACPI: Interpreter enabled
May 16 09:41:59.819936 kernel: ACPI: Using GIC for interrupt routing
May 16 09:41:59.819943 kernel: ACPI: MCFG table detected, 1 entries
May 16 09:41:59.819950 kernel: ACPI: CPU0 has been hot-added
May 16 09:41:59.819957 kernel: ACPI: CPU1 has been hot-added
May 16 09:41:59.819963 kernel: ACPI: CPU2 has been hot-added
May 16 09:41:59.819971 kernel: ACPI: CPU3 has been hot-added
May 16 09:41:59.819978 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 09:41:59.819985 kernel: printk: legacy console [ttyAMA0] enabled
May 16 09:41:59.819992 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 09:41:59.820127 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 09:41:59.820194 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 09:41:59.820253 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 09:41:59.820310 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 09:41:59.820370 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 09:41:59.820380 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 09:41:59.820387 kernel: PCI host bridge to bus 0000:00
May 16 09:41:59.820451 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 09:41:59.820507 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 09:41:59.820561 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 09:41:59.820613 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 09:41:59.820694 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 16 09:41:59.820782 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 09:41:59.820847 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 16 09:41:59.820910 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 16 09:41:59.820969 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 09:41:59.821029 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 16 09:41:59.821088 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 16 09:41:59.821161 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 16 09:41:59.821217 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 09:41:59.821269 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 09:41:59.821321 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 09:41:59.821330 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 09:41:59.821337 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 09:41:59.821345 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 09:41:59.821354 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 09:41:59.821361 kernel: iommu: Default domain type: Translated
May 16 09:41:59.821368 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 09:41:59.821375 kernel: efivars: Registered efivars operations
May 16 09:41:59.821382 kernel: vgaarb: loaded
May 16 09:41:59.821389 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 09:41:59.821395 kernel: VFS: Disk quotas dquot_6.6.0
May 16 09:41:59.821403 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 09:41:59.821409 kernel: pnp: PnP ACPI init
May 16 09:41:59.821477 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 09:41:59.821487 kernel: pnp: PnP ACPI: found 1 devices
May 16 09:41:59.821494 kernel: NET: Registered PF_INET protocol family
May 16 09:41:59.821501 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 09:41:59.821509 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 09:41:59.821516 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 09:41:59.821523 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 09:41:59.821530 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 09:41:59.821538 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 09:41:59.821545 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 09:41:59.821552 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 09:41:59.821559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 09:41:59.821566 kernel: PCI: CLS 0 bytes, default 64
May 16 09:41:59.821573 kernel: kvm [1]: HYP mode not available
May 16 09:41:59.821580 kernel: Initialise system trusted keyrings
May 16 09:41:59.821587 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 09:41:59.821594 kernel: Key type asymmetric registered
May 16 09:41:59.821603 kernel: Asymmetric key parser 'x509' registered
May 16 09:41:59.821610 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 09:41:59.821617 kernel: io scheduler mq-deadline registered
May 16 09:41:59.821623 kernel: io scheduler kyber registered
May 16 09:41:59.821630 kernel: io scheduler bfq registered
May 16 09:41:59.821638 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 09:41:59.821644 kernel: ACPI: button: Power Button [PWRB]
May 16 09:41:59.821652 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 09:41:59.821713 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 09:41:59.821724 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 09:41:59.821731 kernel: thunder_xcv, ver 1.0
May 16 09:41:59.821738 kernel: thunder_bgx, ver 1.0
May 16 09:41:59.821756 kernel: nicpf, ver 1.0
May 16 09:41:59.821764 kernel: nicvf, ver 1.0
May 16 09:41:59.821836 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 09:41:59.821893 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T09:41:59 UTC (1747388519)
May 16 09:41:59.821902 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 09:41:59.821911 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 16 09:41:59.821918 kernel: watchdog: NMI not fully supported
May 16 09:41:59.821925 kernel: watchdog: Hard watchdog permanently disabled
May 16 09:41:59.821932 kernel: NET: Registered PF_INET6 protocol family
May 16 09:41:59.821939 kernel: Segment Routing with IPv6
May 16 09:41:59.821946 kernel: In-situ OAM (IOAM) with IPv6
May 16 09:41:59.821953 kernel: NET: Registered PF_PACKET protocol family
May 16 09:41:59.821960 kernel: Key type dns_resolver registered
May 16 09:41:59.821967 kernel: registered taskstats version 1
May 16 09:41:59.821975 kernel: Loading compiled-in X.509 certificates
May 16 09:41:59.821982 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: e7b097e50e016e102bfdd733c3ddebaed9ee0e35'
May 16 09:41:59.821989 kernel: Demotion targets for Node 0: null
May 16 09:41:59.821996 kernel: Key type .fscrypt registered
May 16 09:41:59.822002 kernel: Key type fscrypt-provisioning registered
May 16 09:41:59.822009 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 09:41:59.822016 kernel: ima: Allocated hash algorithm: sha1
May 16 09:41:59.822023 kernel: ima: No architecture policies found
May 16 09:41:59.822030 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 09:41:59.822038 kernel: clk: Disabling unused clocks
May 16 09:41:59.822045 kernel: PM: genpd: Disabling unused power domains
May 16 09:41:59.822052 kernel: Warning: unable to open an initial console.
May 16 09:41:59.822060 kernel: Freeing unused kernel memory: 39424K
May 16 09:41:59.822067 kernel: Run /init as init process
May 16 09:41:59.822074 kernel: with arguments:
May 16 09:41:59.822080 kernel: /init
May 16 09:41:59.822087 kernel: with environment:
May 16 09:41:59.822094 kernel: HOME=/
May 16 09:41:59.822102 kernel: TERM=linux
May 16 09:41:59.822115 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 09:41:59.822123 systemd[1]: Successfully made /usr/ read-only.
May 16 09:41:59.822133 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 09:41:59.822141 systemd[1]: Detected virtualization kvm.
May 16 09:41:59.822148 systemd[1]: Detected architecture arm64.
May 16 09:41:59.822155 systemd[1]: Running in initrd.
May 16 09:41:59.822163 systemd[1]: No hostname configured, using default hostname.
May 16 09:41:59.822172 systemd[1]: Hostname set to .
May 16 09:41:59.822180 systemd[1]: Initializing machine ID from VM UUID.
May 16 09:41:59.822187 systemd[1]: Queued start job for default target initrd.target.
May 16 09:41:59.822195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 09:41:59.822202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 09:41:59.822210 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 09:41:59.822221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 09:41:59.822229 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 09:41:59.822238 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 09:41:59.822247 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 09:41:59.822254 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 09:41:59.822262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 09:41:59.822269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 09:41:59.822277 systemd[1]: Reached target paths.target - Path Units.
May 16 09:41:59.822286 systemd[1]: Reached target slices.target - Slice Units.
May 16 09:41:59.822293 systemd[1]: Reached target swap.target - Swaps.
May 16 09:41:59.822301 systemd[1]: Reached target timers.target - Timer Units.
May 16 09:41:59.822308 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 09:41:59.822316 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 09:41:59.822323 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 09:41:59.822331 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 09:41:59.822338 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 09:41:59.822346 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 09:41:59.822355 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 09:41:59.822362 systemd[1]: Reached target sockets.target - Socket Units.
May 16 09:41:59.822370 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 09:41:59.822377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 09:41:59.822385 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 09:41:59.822393 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 09:41:59.822400 systemd[1]: Starting systemd-fsck-usr.service...
May 16 09:41:59.822408 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 09:41:59.822417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 09:41:59.822424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 09:41:59.822431 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 09:41:59.822440 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 09:41:59.822447 systemd[1]: Finished systemd-fsck-usr.service.
May 16 09:41:59.822456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 09:41:59.822482 systemd-journald[242]: Collecting audit messages is disabled.
May 16 09:41:59.822501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 09:41:59.822509 systemd-journald[242]: Journal started
May 16 09:41:59.822529 systemd-journald[242]: Runtime Journal (/run/log/journal/78d4372c09934bcc96a492a9a7f379e6) is 6M, max 48.5M, 42.4M free.
May 16 09:41:59.812879 systemd-modules-load[244]: Inserted module 'overlay'
May 16 09:41:59.825352 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 09:41:59.828759 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 09:41:59.830533 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 09:41:59.832952 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 16 09:41:59.833769 kernel: Bridge firewalling registered
May 16 09:41:59.835851 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 09:41:59.837202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 09:41:59.841136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 09:41:59.842573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 09:41:59.844322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 09:41:59.848868 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 09:41:59.851060 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 09:41:59.855916 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 09:41:59.858824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 09:41:59.860030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 09:41:59.861793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 09:41:59.865533 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 09:41:59.867564 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efb8cca3b981587a1314d5462995d10283ca386e95a1cc1f8f2d642520bcc17
May 16 09:41:59.906123 systemd-resolved[296]: Positive Trust Anchors:
May 16 09:41:59.906138 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 09:41:59.906170 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 09:41:59.912043 systemd-resolved[296]: Defaulting to hostname 'linux'.
May 16 09:41:59.912977 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 09:41:59.914472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 09:41:59.937774 kernel: SCSI subsystem initialized
May 16 09:41:59.941766 kernel: Loading iSCSI transport class v2.0-870.
May 16 09:41:59.950793 kernel: iscsi: registered transport (tcp)
May 16 09:41:59.961768 kernel: iscsi: registered transport (qla4xxx)
May 16 09:41:59.961792 kernel: QLogic iSCSI HBA Driver
May 16 09:41:59.976720 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 09:41:59.993783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 09:41:59.995846 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 09:42:00.038818 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 09:42:00.041042 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 09:42:00.097777 kernel: raid6: neonx8 gen() 15780 MB/s May 16 09:42:00.114795 kernel: raid6: neonx4 gen() 15821 MB/s May 16 09:42:00.131758 kernel: raid6: neonx2 gen() 13289 MB/s May 16 09:42:00.148770 kernel: raid6: neonx1 gen() 10486 MB/s May 16 09:42:00.165767 kernel: raid6: int64x8 gen() 6895 MB/s May 16 09:42:00.182760 kernel: raid6: int64x4 gen() 7352 MB/s May 16 09:42:00.199766 kernel: raid6: int64x2 gen() 6104 MB/s May 16 09:42:00.216768 kernel: raid6: int64x1 gen() 5044 MB/s May 16 09:42:00.216805 kernel: raid6: using algorithm neonx4 gen() 15821 MB/s May 16 09:42:00.233776 kernel: raid6: .... xor() 12303 MB/s, rmw enabled May 16 09:42:00.233801 kernel: raid6: using neon recovery algorithm May 16 09:42:00.241051 kernel: xor: measuring software checksum speed May 16 09:42:00.241071 kernel: 8regs : 20886 MB/sec May 16 09:42:00.241081 kernel: 32regs : 21710 MB/sec May 16 09:42:00.241965 kernel: arm64_neon : 27359 MB/sec May 16 09:42:00.241978 kernel: xor: using function: arm64_neon (27359 MB/sec) May 16 09:42:00.296770 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 09:42:00.303313 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 09:42:00.305832 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 09:42:00.338063 systemd-udevd[497]: Using default interface naming scheme 'v255'. May 16 09:42:00.342119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 09:42:00.344498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 09:42:00.369208 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 16 09:42:00.390903 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 09:42:00.392800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 09:42:00.444716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 16 09:42:00.446354 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 09:42:00.490253 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 16 09:42:00.506572 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 09:42:00.506690 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 09:42:00.506701 kernel: GPT:9289727 != 19775487 May 16 09:42:00.506710 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 09:42:00.506719 kernel: GPT:9289727 != 19775487 May 16 09:42:00.506727 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 09:42:00.506736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 09:42:00.496877 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 09:42:00.496985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 09:42:00.498589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 09:42:00.503068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 09:42:00.531411 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 09:42:00.533855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 09:42:00.542331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 09:42:00.543644 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 09:42:00.556869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 09:42:00.557732 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 09:42:00.566759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 16 09:42:00.567633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 09:42:00.569567 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 09:42:00.571449 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 09:42:00.573781 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 09:42:00.575487 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 09:42:00.591373 disk-uuid[591]: Primary Header is updated. May 16 09:42:00.591373 disk-uuid[591]: Secondary Entries is updated. May 16 09:42:00.591373 disk-uuid[591]: Secondary Header is updated. May 16 09:42:00.596298 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 09:42:00.594495 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 09:42:01.604613 disk-uuid[596]: The operation has completed successfully. May 16 09:42:01.605666 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 09:42:01.626431 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 09:42:01.627472 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 09:42:01.656657 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 09:42:01.668628 sh[612]: Success May 16 09:42:01.682776 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 09:42:01.682826 kernel: device-mapper: uevent: version 1.0.3 May 16 09:42:01.684764 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 09:42:01.694788 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 16 09:42:01.721050 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 09:42:01.722997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 16 09:42:01.738871 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 09:42:01.746555 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 09:42:01.746585 kernel: BTRFS: device fsid 9108ecbf-b780-4a5b-b31c-dcb97545c897 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (624) May 16 09:42:01.748273 kernel: BTRFS info (device dm-0): first mount of filesystem 9108ecbf-b780-4a5b-b31c-dcb97545c897 May 16 09:42:01.748300 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 16 09:42:01.748311 kernel: BTRFS info (device dm-0): using free-space-tree May 16 09:42:01.752947 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 09:42:01.753877 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 09:42:01.755555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 09:42:01.756255 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 09:42:01.757845 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 09:42:01.772821 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (656) May 16 09:42:01.774861 kernel: BTRFS info (device vda6): first mount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:42:01.774895 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 09:42:01.774905 kernel: BTRFS info (device vda6): using free-space-tree May 16 09:42:01.781794 kernel: BTRFS info (device vda6): last unmount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:42:01.783038 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 09:42:01.785425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 09:42:01.854187 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 09:42:01.857526 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 09:42:01.893414 systemd-networkd[800]: lo: Link UP May 16 09:42:01.893425 systemd-networkd[800]: lo: Gained carrier May 16 09:42:01.894178 systemd-networkd[800]: Enumeration completed May 16 09:42:01.894279 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 09:42:01.894898 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 09:42:01.894902 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 09:42:01.895473 systemd-networkd[800]: eth0: Link UP May 16 09:42:01.895476 systemd-networkd[800]: eth0: Gained carrier May 16 09:42:01.895484 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 09:42:01.896631 systemd[1]: Reached target network.target - Network. 
May 16 09:42:01.914786 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 09:42:01.922461 ignition[700]: Ignition 2.21.0 May 16 09:42:01.922476 ignition[700]: Stage: fetch-offline May 16 09:42:01.922509 ignition[700]: no configs at "/usr/lib/ignition/base.d" May 16 09:42:01.922517 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:42:01.922717 ignition[700]: parsed url from cmdline: "" May 16 09:42:01.922720 ignition[700]: no config URL provided May 16 09:42:01.922724 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" May 16 09:42:01.922731 ignition[700]: no config at "/usr/lib/ignition/user.ign" May 16 09:42:01.922764 ignition[700]: op(1): [started] loading QEMU firmware config module May 16 09:42:01.922768 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 09:42:01.933188 ignition[700]: op(1): [finished] loading QEMU firmware config module May 16 09:42:01.970396 ignition[700]: parsing config with SHA512: 43304942114956b5d4c9c7a56ce1ac9d81e2e7d888325ea3cb6a003364c0e1a15d30d969f7dffb4aed93589295c12463f3e3a64433cfa67fd93679a6962365ff May 16 09:42:01.974632 unknown[700]: fetched base config from "system" May 16 09:42:01.974643 unknown[700]: fetched user config from "qemu" May 16 09:42:01.975020 ignition[700]: fetch-offline: fetch-offline passed May 16 09:42:01.975077 ignition[700]: Ignition finished successfully May 16 09:42:01.977553 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 09:42:01.978873 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 09:42:01.979616 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 16 09:42:02.006998 ignition[814]: Ignition 2.21.0 May 16 09:42:02.007013 ignition[814]: Stage: kargs May 16 09:42:02.007234 ignition[814]: no configs at "/usr/lib/ignition/base.d" May 16 09:42:02.007244 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:42:02.009159 ignition[814]: kargs: kargs passed May 16 09:42:02.009212 ignition[814]: Ignition finished successfully May 16 09:42:02.012358 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 09:42:02.015600 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 09:42:02.040725 ignition[822]: Ignition 2.21.0 May 16 09:42:02.040739 ignition[822]: Stage: disks May 16 09:42:02.040937 ignition[822]: no configs at "/usr/lib/ignition/base.d" May 16 09:42:02.040946 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:42:02.042337 ignition[822]: disks: disks passed May 16 09:42:02.044314 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 09:42:02.042395 ignition[822]: Ignition finished successfully May 16 09:42:02.045979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 09:42:02.047438 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 09:42:02.049141 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 09:42:02.050914 systemd[1]: Reached target sysinit.target - System Initialization. May 16 09:42:02.052822 systemd[1]: Reached target basic.target - Basic System. May 16 09:42:02.055334 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 09:42:02.080639 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 09:42:02.084986 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 09:42:02.087540 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 16 09:42:02.153770 kernel: EXT4-fs (vda9): mounted filesystem a09a4a8b-405d-466b-850e-ba0196efa117 r/w with ordered data mode. Quota mode: none. May 16 09:42:02.154543 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 09:42:02.155730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 09:42:02.157840 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 09:42:02.159443 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 09:42:02.160427 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 09:42:02.160467 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 09:42:02.160503 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 09:42:02.172386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 09:42:02.174713 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 09:42:02.177763 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (841) May 16 09:42:02.179341 kernel: BTRFS info (device vda6): first mount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:42:02.179365 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 09:42:02.179376 kernel: BTRFS info (device vda6): using free-space-tree May 16 09:42:02.182241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 09:42:02.220524 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory May 16 09:42:02.224546 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory May 16 09:42:02.228445 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory May 16 09:42:02.231988 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory May 16 09:42:02.298589 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 09:42:02.300651 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 09:42:02.302195 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 09:42:02.319771 kernel: BTRFS info (device vda6): last unmount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:42:02.331251 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 09:42:02.336492 ignition[955]: INFO : Ignition 2.21.0 May 16 09:42:02.336492 ignition[955]: INFO : Stage: mount May 16 09:42:02.338063 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 09:42:02.338063 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:42:02.340144 ignition[955]: INFO : mount: mount passed May 16 09:42:02.340144 ignition[955]: INFO : Ignition finished successfully May 16 09:42:02.340483 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 09:42:02.343565 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 09:42:02.868824 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 09:42:02.870319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 09:42:02.887759 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (968) May 16 09:42:02.889877 kernel: BTRFS info (device vda6): first mount of filesystem 1663b735-9163-4a80-bc0d-8580d7a25027 May 16 09:42:02.889896 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 09:42:02.889907 kernel: BTRFS info (device vda6): using free-space-tree May 16 09:42:02.893393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 09:42:02.916916 ignition[985]: INFO : Ignition 2.21.0 May 16 09:42:02.916916 ignition[985]: INFO : Stage: files May 16 09:42:02.918996 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 09:42:02.918996 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:42:02.918996 ignition[985]: DEBUG : files: compiled without relabeling support, skipping May 16 09:42:02.921921 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 09:42:02.921921 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 09:42:02.924337 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 09:42:02.924337 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 09:42:02.924337 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 09:42:02.924094 unknown[985]: wrote ssh authorized keys file for user: core May 16 09:42:02.928823 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 09:42:02.928823 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 16 09:42:03.584889 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 09:42:03.925954 systemd-networkd[800]: eth0: Gained IPv6LL May 16 09:42:05.409631 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 09:42:05.411514 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 09:42:05.424883 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 09:42:05.424883 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 09:42:05.424883 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 16 09:42:05.424883 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 16 09:42:05.424883 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 16 09:42:05.424883 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 16 09:42:05.810327 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 09:42:06.852471 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 16 09:42:06.852471 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 09:42:06.856468 ignition[985]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 16 09:42:06.856468 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 16 09:42:06.872617 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 09:42:06.876634 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 09:42:06.878905 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 16 09:42:06.878905 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 16 09:42:06.878905 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 16 09:42:06.878905 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 09:42:06.878905 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 09:42:06.878905 ignition[985]: INFO : files: files passed May 16 09:42:06.878905 ignition[985]: INFO : Ignition finished successfully May 16 09:42:06.881782 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 09:42:06.884887 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 09:42:06.889267 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 09:42:06.896736 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 09:42:06.898111 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory May 16 09:42:06.898405 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 09:42:06.903101 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 09:42:06.903101 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 09:42:06.906397 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 09:42:06.905242 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 09:42:06.907779 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 09:42:06.909475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 09:42:06.955713 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 09:42:06.956765 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 09:42:06.958165 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 09:42:06.959934 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 09:42:06.961787 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 09:42:06.962571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 09:42:06.991699 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 09:42:06.994204 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 09:42:07.021013 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 09:42:07.022303 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 09:42:07.024298 systemd[1]: Stopped target timers.target - Timer Units. May 16 09:42:07.026055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 16 09:42:07.026185 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 09:42:07.028647 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 09:42:07.030752 systemd[1]: Stopped target basic.target - Basic System. May 16 09:42:07.033216 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 09:42:07.034895 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 09:42:07.036823 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 09:42:07.038789 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 16 09:42:07.040708 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 09:42:07.042547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 09:42:07.044477 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 09:42:07.046422 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 09:42:07.048248 systemd[1]: Stopped target swap.target - Swaps. May 16 09:42:07.049712 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 09:42:07.049862 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 09:42:07.052140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 09:42:07.054021 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 09:42:07.055941 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 09:42:07.056854 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 09:42:07.057830 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 09:42:07.057955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 09:42:07.060822 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 16 09:42:07.060946 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 09:42:07.062870 systemd[1]: Stopped target paths.target - Path Units. May 16 09:42:07.064486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 09:42:07.064596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 09:42:07.066496 systemd[1]: Stopped target slices.target - Slice Units. May 16 09:42:07.068287 systemd[1]: Stopped target sockets.target - Socket Units. May 16 09:42:07.069788 systemd[1]: iscsid.socket: Deactivated successfully. May 16 09:42:07.069867 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 09:42:07.071558 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 09:42:07.071634 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 09:42:07.073767 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 09:42:07.073886 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 09:42:07.075583 systemd[1]: ignition-files.service: Deactivated successfully. May 16 09:42:07.075681 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 09:42:07.077979 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 09:42:07.080131 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 09:42:07.080974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 09:42:07.081121 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 09:42:07.082957 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 09:42:07.083054 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 09:42:07.090004 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 16 09:42:07.091784 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 09:42:07.094509 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 09:42:07.098620 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 09:42:07.100661 ignition[1040]: INFO : Ignition 2.21.0 May 16 09:42:07.100661 ignition[1040]: INFO : Stage: umount May 16 09:42:07.100661 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 09:42:07.100661 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 09:42:07.098724 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 09:42:07.106468 ignition[1040]: INFO : umount: umount passed May 16 09:42:07.106468 ignition[1040]: INFO : Ignition finished successfully May 16 09:42:07.105643 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 09:42:07.105770 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 09:42:07.107593 systemd[1]: Stopped target network.target - Network. May 16 09:42:07.108949 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 09:42:07.109013 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 09:42:07.110640 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 09:42:07.110688 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 09:42:07.112449 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 09:42:07.112502 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 09:42:07.114237 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 09:42:07.114280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 09:42:07.115835 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 09:42:07.115882 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 16 09:42:07.117703 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 09:42:07.119414 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 09:42:07.123416 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 09:42:07.123517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 09:42:07.126650 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 09:42:07.126888 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 09:42:07.126925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 09:42:07.130322 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 09:42:07.135553 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 09:42:07.135650 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 09:42:07.138545 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 09:42:07.138670 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 16 09:42:07.140211 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 09:42:07.140251 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 09:42:07.142672 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 09:42:07.143562 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 09:42:07.143616 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 09:42:07.147103 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 09:42:07.147153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 09:42:07.150051 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 16 09:42:07.150105 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 09:42:07.152429 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 09:42:07.155102 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 09:42:07.169339 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 09:42:07.169486 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 09:42:07.171664 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 09:42:07.171773 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 09:42:07.173892 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 09:42:07.173958 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 09:42:07.175104 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 09:42:07.175136 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 09:42:07.176780 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 09:42:07.176825 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 09:42:07.179423 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 09:42:07.179467 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 09:42:07.182077 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 09:42:07.182127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 09:42:07.185497 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 09:42:07.186694 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 16 09:42:07.186764 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 16 09:42:07.189487 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 09:42:07.189530 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 09:42:07.192714 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 09:42:07.192768 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 09:42:07.195824 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 09:42:07.195889 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 09:42:07.198193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 09:42:07.198238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 09:42:07.212586 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 09:42:07.212675 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 09:42:07.214927 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 09:42:07.217389 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 09:42:07.249208 systemd[1]: Switching root.
May 16 09:42:07.280922 systemd-journald[242]: Journal stopped
May 16 09:42:08.035072 systemd-journald[242]: Received SIGTERM from PID 1 (systemd).
May 16 09:42:08.035118 kernel: SELinux: policy capability network_peer_controls=1
May 16 09:42:08.035135 kernel: SELinux: policy capability open_perms=1
May 16 09:42:08.035147 kernel: SELinux: policy capability extended_socket_class=1
May 16 09:42:08.035156 kernel: SELinux: policy capability always_check_network=0
May 16 09:42:08.035166 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 09:42:08.035187 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 09:42:08.035201 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 09:42:08.035213 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 09:42:08.035222 kernel: SELinux: policy capability userspace_initial_context=0
May 16 09:42:08.035231 kernel: audit: type=1403 audit(1747388527.445:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 09:42:08.035241 systemd[1]: Successfully loaded SELinux policy in 45.825ms.
May 16 09:42:08.035253 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.098ms.
May 16 09:42:08.035264 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 09:42:08.035277 systemd[1]: Detected virtualization kvm.
May 16 09:42:08.035287 systemd[1]: Detected architecture arm64.
May 16 09:42:08.035296 systemd[1]: Detected first boot.
May 16 09:42:08.035306 systemd[1]: Initializing machine ID from VM UUID.
May 16 09:42:08.035317 kernel: NET: Registered PF_VSOCK protocol family
May 16 09:42:08.035327 zram_generator::config[1086]: No configuration found.
May 16 09:42:08.035338 systemd[1]: Populated /etc with preset unit settings.
May 16 09:42:08.035348 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 09:42:08.035358 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 09:42:08.035369 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 09:42:08.035379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 09:42:08.035388 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 09:42:08.035398 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 09:42:08.035408 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 09:42:08.035418 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 09:42:08.035427 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 09:42:08.035438 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 09:42:08.035449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 09:42:08.035459 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 09:42:08.035469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 09:42:08.035478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 09:42:08.035488 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 09:42:08.035498 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 09:42:08.035508 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 09:42:08.035517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 09:42:08.035527 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 16 09:42:08.035538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 09:42:08.035548 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 09:42:08.035558 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 09:42:08.035568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 09:42:08.035577 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 09:42:08.035587 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 09:42:08.035597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 09:42:08.035606 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 09:42:08.035618 systemd[1]: Reached target slices.target - Slice Units.
May 16 09:42:08.035627 systemd[1]: Reached target swap.target - Swaps.
May 16 09:42:08.035637 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 09:42:08.035647 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 09:42:08.035657 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 09:42:08.035666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 09:42:08.035676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 09:42:08.035686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 09:42:08.035696 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 09:42:08.035707 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 09:42:08.035716 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 09:42:08.035726 systemd[1]: Mounting media.mount - External Media Directory...
May 16 09:42:08.035736 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 09:42:08.036039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 09:42:08.036069 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 09:42:08.036080 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 09:42:08.036091 systemd[1]: Reached target machines.target - Containers.
May 16 09:42:08.036105 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 09:42:08.036115 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:42:08.036125 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 09:42:08.036135 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 09:42:08.036145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 09:42:08.036155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 09:42:08.036164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 09:42:08.036174 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 09:42:08.036184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 09:42:08.036196 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 09:42:08.036206 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 09:42:08.036216 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 09:42:08.036226 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 09:42:08.036235 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 09:42:08.036246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:42:08.036256 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 09:42:08.036266 kernel: fuse: init (API version 7.41)
May 16 09:42:08.036277 kernel: loop: module loaded
May 16 09:42:08.036286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 09:42:08.036296 kernel: ACPI: bus type drm_connector registered
May 16 09:42:08.036305 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 09:42:08.036316 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 09:42:08.036326 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 09:42:08.036336 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 09:42:08.036347 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 09:42:08.036357 systemd[1]: Stopped verity-setup.service.
May 16 09:42:08.036367 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 09:42:08.036377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 09:42:08.036386 systemd[1]: Mounted media.mount - External Media Directory.
May 16 09:42:08.036396 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 09:42:08.036429 systemd-journald[1158]: Collecting audit messages is disabled.
May 16 09:42:08.036461 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 09:42:08.036471 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 09:42:08.036482 systemd-journald[1158]: Journal started
May 16 09:42:08.036505 systemd-journald[1158]: Runtime Journal (/run/log/journal/78d4372c09934bcc96a492a9a7f379e6) is 6M, max 48.5M, 42.4M free.
May 16 09:42:07.827628 systemd[1]: Queued start job for default target multi-user.target.
May 16 09:42:07.852563 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 09:42:07.852934 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 09:42:08.039397 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 09:42:08.041188 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 09:42:08.043358 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 09:42:08.044873 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 09:42:08.045180 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 09:42:08.046606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 09:42:08.048856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 09:42:08.050291 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 09:42:08.050467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 09:42:08.051800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 09:42:08.051962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 09:42:08.053474 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 09:42:08.053659 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 09:42:08.054979 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 09:42:08.055150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 09:42:08.056478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 09:42:08.059116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 09:42:08.060635 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 09:42:08.062204 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 09:42:08.073974 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 09:42:08.076341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 09:42:08.078357 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 09:42:08.079506 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 09:42:08.079542 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 09:42:08.081411 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 09:42:08.089675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 09:42:08.090831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:42:08.092182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 09:42:08.094273 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 09:42:08.095531 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 09:42:08.097883 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 09:42:08.098950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 09:42:08.099783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 09:42:08.101723 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 09:42:08.103939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 09:42:08.106405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 09:42:08.108090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 09:42:08.109452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 09:42:08.110823 systemd-journald[1158]: Time spent on flushing to /var/log/journal/78d4372c09934bcc96a492a9a7f379e6 is 16.940ms for 887 entries.
May 16 09:42:08.110823 systemd-journald[1158]: System Journal (/var/log/journal/78d4372c09934bcc96a492a9a7f379e6) is 8M, max 195.6M, 187.6M free.
May 16 09:42:08.150291 systemd-journald[1158]: Received client request to flush runtime journal.
May 16 09:42:08.150346 kernel: loop0: detected capacity change from 0 to 201592
May 16 09:42:08.120933 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 09:42:08.122305 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 09:42:08.125324 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 09:42:08.128901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 09:42:08.142527 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
May 16 09:42:08.142537 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
May 16 09:42:08.147494 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 09:42:08.150799 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 09:42:08.153804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 09:42:08.155610 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 09:42:08.159646 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 09:42:08.178782 kernel: loop1: detected capacity change from 0 to 138376
May 16 09:42:08.185366 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 09:42:08.188532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 09:42:08.199866 kernel: loop2: detected capacity change from 0 to 107312
May 16 09:42:08.211282 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
May 16 09:42:08.211624 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
May 16 09:42:08.215742 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 09:42:08.223781 kernel: loop3: detected capacity change from 0 to 201592
May 16 09:42:08.230280 kernel: loop4: detected capacity change from 0 to 138376
May 16 09:42:08.236792 kernel: loop5: detected capacity change from 0 to 107312
May 16 09:42:08.241693 (sd-merge)[1230]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 09:42:08.242082 (sd-merge)[1230]: Merged extensions into '/usr'.
May 16 09:42:08.246226 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 09:42:08.246241 systemd[1]: Reloading...
May 16 09:42:08.297780 zram_generator::config[1253]: No configuration found.
May 16 09:42:08.383402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 09:42:08.386079 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 09:42:08.446476 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 09:42:08.446947 systemd[1]: Reloading finished in 200 ms.
May 16 09:42:08.483351 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 09:42:08.486819 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 09:42:08.506110 systemd[1]: Starting ensure-sysext.service...
May 16 09:42:08.510066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 09:42:08.521900 systemd[1]: Reload requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
May 16 09:42:08.521914 systemd[1]: Reloading...
May 16 09:42:08.524334 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 09:42:08.524361 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 09:42:08.524583 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 09:42:08.525089 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 09:42:08.525850 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 09:42:08.526173 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
May 16 09:42:08.526289 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
May 16 09:42:08.528899 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
May 16 09:42:08.529004 systemd-tmpfiles[1292]: Skipping /boot
May 16 09:42:08.537351 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
May 16 09:42:08.537447 systemd-tmpfiles[1292]: Skipping /boot
May 16 09:42:08.568816 zram_generator::config[1319]: No configuration found.
May 16 09:42:08.633406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 09:42:08.695181 systemd[1]: Reloading finished in 173 ms.
May 16 09:42:08.722779 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 09:42:08.728273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 09:42:08.736797 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 09:42:08.739190 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 09:42:08.750936 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 09:42:08.754470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 09:42:08.760015 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 09:42:08.763124 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 09:42:08.776342 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 09:42:08.780877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:42:08.781938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 09:42:08.785106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 09:42:08.789175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 09:42:08.790527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:42:08.790668 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:42:08.795913 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 09:42:08.801205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 09:42:08.802839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 09:42:08.805152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 09:42:08.805391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 09:42:08.807489 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 09:42:08.807701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 09:42:08.811689 systemd-udevd[1361]: Using default interface naming scheme 'v255'.
May 16 09:42:08.811922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 09:42:08.812370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 09:42:08.818014 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 09:42:08.823788 augenrules[1390]: No rules
May 16 09:42:08.821288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 09:42:08.823066 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 09:42:08.823279 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 09:42:08.824815 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 09:42:08.832229 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 09:42:08.833487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 09:42:08.834575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 09:42:08.837240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 09:42:08.840939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 09:42:08.850977 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 09:42:08.852108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 09:42:08.852263 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 09:42:08.852379 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 09:42:08.853636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 09:42:08.868509 augenrules[1398]: /sbin/augenrules: No change
May 16 09:42:08.878229 augenrules[1451]: No rules
May 16 09:42:08.892309 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 09:42:08.895781 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 09:42:08.897260 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 09:42:08.897419 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 09:42:08.900232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 09:42:08.900386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 09:42:08.902178 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 09:42:08.902517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 09:42:08.904073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 09:42:08.904298 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 09:42:08.906401 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 09:42:08.906555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 09:42:08.910327 systemd[1]: Finished ensure-sysext.service.
May 16 09:42:08.923588 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 09:42:08.925883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 09:42:08.925939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 09:42:08.928560 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 09:42:08.931031 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 16 09:42:08.958066 systemd-resolved[1360]: Positive Trust Anchors:
May 16 09:42:08.958080 systemd-resolved[1360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 09:42:08.958111 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 09:42:08.967103 systemd-resolved[1360]: Defaulting to hostname 'linux'.
May 16 09:42:08.968463 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 09:42:08.969987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 09:42:09.006144 systemd-networkd[1464]: lo: Link UP
May 16 09:42:09.006152 systemd-networkd[1464]: lo: Gained carrier
May 16 09:42:09.007481 systemd-networkd[1464]: Enumeration completed
May 16 09:42:09.007947 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 09:42:09.008011 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 09:42:09.008509 systemd-networkd[1464]: eth0: Link UP
May 16 09:42:09.008693 systemd-networkd[1464]: eth0: Gained carrier
May 16 09:42:09.008756 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 09:42:09.008902 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 09:42:09.010015 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 09:42:09.013114 systemd[1]: Reached target network.target - Network.
May 16 09:42:09.014946 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 09:42:09.017109 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 09:42:09.023968 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 09:42:09.026000 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 09:42:09.026803 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 09:42:09.027555 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. May 16 09:42:09.027928 systemd[1]: Reached target sysinit.target - System Initialization. May 16 09:42:09.029342 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 09:42:09.031219 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 09:42:09.032504 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 09:42:09.034290 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 09:42:09.034320 systemd[1]: Reached target paths.target - Path Units. May 16 09:42:09.035251 systemd[1]: Reached target time-set.target - System Time Set. May 16 09:42:09.035690 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 09:42:09.035759 systemd-timesyncd[1465]: Initial clock synchronization to Fri 2025-05-16 09:42:08.987084 UTC. May 16 09:42:09.036504 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 09:42:09.038954 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
May 16 09:42:09.040131 systemd[1]: Reached target timers.target - Timer Units. May 16 09:42:09.042017 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 09:42:09.044190 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 09:42:09.047879 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 09:42:09.049209 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 09:42:09.050423 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 09:42:09.054832 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 09:42:09.056494 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 09:42:09.059779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 09:42:09.061925 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 09:42:09.063359 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 09:42:09.065485 systemd[1]: Reached target sockets.target - Socket Units. May 16 09:42:09.066545 systemd[1]: Reached target basic.target - Basic System. May 16 09:42:09.067581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 09:42:09.067612 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 09:42:09.070329 systemd[1]: Starting containerd.service - containerd container runtime... May 16 09:42:09.074924 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 09:42:09.077771 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 09:42:09.082713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
May 16 09:42:09.099490 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 09:42:09.100579 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 09:42:09.101032 jq[1500]: false May 16 09:42:09.104904 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 09:42:09.107007 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 09:42:09.111974 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 09:42:09.115924 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 09:42:09.125532 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 09:42:09.127376 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 09:42:09.128118 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 09:42:09.130894 systemd[1]: Starting update-engine.service - Update Engine... May 16 09:42:09.133930 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 09:42:09.137466 extend-filesystems[1501]: Found loop3 May 16 09:42:09.137466 extend-filesystems[1501]: Found loop4 May 16 09:42:09.137987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
May 16 09:42:09.144139 extend-filesystems[1501]: Found loop5 May 16 09:42:09.144139 extend-filesystems[1501]: Found vda May 16 09:42:09.144139 extend-filesystems[1501]: Found vda1 May 16 09:42:09.144139 extend-filesystems[1501]: Found vda2 May 16 09:42:09.144139 extend-filesystems[1501]: Found vda3 May 16 09:42:09.144139 extend-filesystems[1501]: Found usr May 16 09:42:09.144139 extend-filesystems[1501]: Found vda4 May 16 09:42:09.144139 extend-filesystems[1501]: Found vda6 May 16 09:42:09.144139 extend-filesystems[1501]: Found vda7 May 16 09:42:09.144139 extend-filesystems[1501]: Found vda9 May 16 09:42:09.144139 extend-filesystems[1501]: Checking size of /dev/vda9 May 16 09:42:09.145106 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 09:42:09.173456 jq[1515]: true May 16 09:42:09.145310 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 09:42:09.145561 systemd[1]: motdgen.service: Deactivated successfully. May 16 09:42:09.145729 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 09:42:09.151695 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 09:42:09.151893 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 09:42:09.167725 (ntainerd)[1522]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 09:42:09.174663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 16 09:42:09.181104 extend-filesystems[1501]: Resized partition /dev/vda9 May 16 09:42:09.193781 jq[1525]: true May 16 09:42:09.205767 extend-filesystems[1535]: resize2fs 1.47.2 (1-Jan-2025) May 16 09:42:09.214387 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (Power Button) May 16 09:42:09.215532 tar[1521]: linux-arm64/LICENSE May 16 09:42:09.215767 tar[1521]: linux-arm64/helm May 16 09:42:09.216915 systemd-logind[1512]: New seat seat0. May 16 09:42:09.222771 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 09:42:09.225807 systemd[1]: Started systemd-logind.service - User Login Management. May 16 09:42:09.244404 dbus-daemon[1497]: [system] SELinux support is enabled May 16 09:42:09.246925 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 09:42:09.252481 update_engine[1513]: I20250516 09:42:09.251885 1513 main.cc:92] Flatcar Update Engine starting May 16 09:42:09.252574 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 09:42:09.252599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 09:42:09.254384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 09:42:09.256596 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 09:42:09.254410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 09:42:09.260802 update_engine[1513]: I20250516 09:42:09.260722 1513 update_check_scheduler.cc:74] Next update check in 8m35s May 16 09:42:09.260856 systemd[1]: Started update-engine.service - Update Engine. 
May 16 09:42:09.263531 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 09:42:09.281854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 09:42:09.285765 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 09:42:09.311759 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 09:42:09.311759 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 09:42:09.311759 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 09:42:09.316095 extend-filesystems[1501]: Resized filesystem in /dev/vda9 May 16 09:42:09.315608 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 09:42:09.317096 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 09:42:09.331994 bash[1559]: Updated "/home/core/.ssh/authorized_keys" May 16 09:42:09.333728 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 09:42:09.336712 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 16 09:42:09.342364 locksmithd[1551]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 09:42:09.409956 containerd[1522]: time="2025-05-16T09:42:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 09:42:09.413285 containerd[1522]: time="2025-05-16T09:42:09.413251960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 09:42:09.422791 containerd[1522]: time="2025-05-16T09:42:09.422735040Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.48µs" May 16 09:42:09.422826 containerd[1522]: time="2025-05-16T09:42:09.422790400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 09:42:09.422826 containerd[1522]: time="2025-05-16T09:42:09.422808600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 09:42:09.423014 containerd[1522]: time="2025-05-16T09:42:09.422962920Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 09:42:09.423036 containerd[1522]: time="2025-05-16T09:42:09.423016800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 09:42:09.423081 containerd[1522]: time="2025-05-16T09:42:09.423050040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 09:42:09.423143 containerd[1522]: time="2025-05-16T09:42:09.423125720Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 09:42:09.423143 containerd[1522]: time="2025-05-16T09:42:09.423140280Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 09:42:09.423412 containerd[1522]: time="2025-05-16T09:42:09.423390160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 09:42:09.423448 containerd[1522]: time="2025-05-16T09:42:09.423411520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 09:42:09.423448 containerd[1522]: time="2025-05-16T09:42:09.423434480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 09:42:09.423448 containerd[1522]: time="2025-05-16T09:42:09.423442640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 09:42:09.423541 containerd[1522]: time="2025-05-16T09:42:09.423525120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 09:42:09.423757 containerd[1522]: time="2025-05-16T09:42:09.423722440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 09:42:09.423800 containerd[1522]: time="2025-05-16T09:42:09.423782360Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 09:42:09.423800 containerd[1522]: time="2025-05-16T09:42:09.423797160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 09:42:09.423859 containerd[1522]: time="2025-05-16T09:42:09.423844440Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 09:42:09.424159 containerd[1522]: time="2025-05-16T09:42:09.424130680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 09:42:09.424254 containerd[1522]: time="2025-05-16T09:42:09.424237160Z" level=info msg="metadata content store policy set" policy=shared May 16 09:42:09.428880 containerd[1522]: time="2025-05-16T09:42:09.428846960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 09:42:09.428939 containerd[1522]: time="2025-05-16T09:42:09.428899960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 09:42:09.428939 containerd[1522]: time="2025-05-16T09:42:09.428913640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 09:42:09.428939 containerd[1522]: time="2025-05-16T09:42:09.428925280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 09:42:09.428939 containerd[1522]: time="2025-05-16T09:42:09.428939080Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 09:42:09.429021 containerd[1522]: time="2025-05-16T09:42:09.428982560Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 09:42:09.429021 containerd[1522]: time="2025-05-16T09:42:09.428996120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 09:42:09.429021 containerd[1522]: time="2025-05-16T09:42:09.429007640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 09:42:09.429021 containerd[1522]: time="2025-05-16T09:42:09.429019080Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 09:42:09.429097 containerd[1522]: time="2025-05-16T09:42:09.429029320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 09:42:09.429097 containerd[1522]: time="2025-05-16T09:42:09.429038760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 09:42:09.429097 containerd[1522]: time="2025-05-16T09:42:09.429059760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 09:42:09.429200 containerd[1522]: time="2025-05-16T09:42:09.429181000Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 09:42:09.429223 containerd[1522]: time="2025-05-16T09:42:09.429205680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 09:42:09.429241 containerd[1522]: time="2025-05-16T09:42:09.429227640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 09:42:09.429258 containerd[1522]: time="2025-05-16T09:42:09.429243880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 09:42:09.429258 containerd[1522]: time="2025-05-16T09:42:09.429254600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 09:42:09.429288 containerd[1522]: time="2025-05-16T09:42:09.429265680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 09:42:09.429288 containerd[1522]: time="2025-05-16T09:42:09.429277280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 09:42:09.429327 containerd[1522]: time="2025-05-16T09:42:09.429299240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 
09:42:09.429327 containerd[1522]: time="2025-05-16T09:42:09.429315360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 09:42:09.429362 containerd[1522]: time="2025-05-16T09:42:09.429326600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 09:42:09.429362 containerd[1522]: time="2025-05-16T09:42:09.429336600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 09:42:09.429525 containerd[1522]: time="2025-05-16T09:42:09.429509320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 09:42:09.429549 containerd[1522]: time="2025-05-16T09:42:09.429526720Z" level=info msg="Start snapshots syncer" May 16 09:42:09.429578 containerd[1522]: time="2025-05-16T09:42:09.429564320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 09:42:09.429932 containerd[1522]: time="2025-05-16T09:42:09.429897840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 09:42:09.430027 containerd[1522]: time="2025-05-16T09:42:09.429950880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 09:42:09.430055 containerd[1522]: time="2025-05-16T09:42:09.430027880Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 09:42:09.430204 containerd[1522]: time="2025-05-16T09:42:09.430181920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 09:42:09.430228 containerd[1522]: time="2025-05-16T09:42:09.430211240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 09:42:09.430228 containerd[1522]: time="2025-05-16T09:42:09.430222720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 09:42:09.430268 containerd[1522]: time="2025-05-16T09:42:09.430234120Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 09:42:09.430268 containerd[1522]: time="2025-05-16T09:42:09.430246480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 09:42:09.430268 containerd[1522]: time="2025-05-16T09:42:09.430256640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 09:42:09.430268 containerd[1522]: time="2025-05-16T09:42:09.430266680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 09:42:09.430332 containerd[1522]: time="2025-05-16T09:42:09.430289960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 09:42:09.430332 containerd[1522]: time="2025-05-16T09:42:09.430300680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 09:42:09.430332 containerd[1522]: time="2025-05-16T09:42:09.430310320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 09:42:09.430382 containerd[1522]: time="2025-05-16T09:42:09.430352920Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 09:42:09.430382 containerd[1522]: time="2025-05-16T09:42:09.430367960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 09:42:09.430382 containerd[1522]: time="2025-05-16T09:42:09.430375880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 09:42:09.430432 containerd[1522]: time="2025-05-16T09:42:09.430385040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 09:42:09.430432 containerd[1522]: time="2025-05-16T09:42:09.430392800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 09:42:09.430432 containerd[1522]: time="2025-05-16T09:42:09.430403240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 09:42:09.430480 containerd[1522]: time="2025-05-16T09:42:09.430439960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 09:42:09.430533 containerd[1522]: time="2025-05-16T09:42:09.430518920Z" level=info msg="runtime interface created" May 16 09:42:09.430533 containerd[1522]: time="2025-05-16T09:42:09.430529800Z" level=info msg="created NRI interface" May 16 09:42:09.430574 containerd[1522]: time="2025-05-16T09:42:09.430539720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 09:42:09.430574 containerd[1522]: time="2025-05-16T09:42:09.430553000Z" level=info msg="Connect containerd service" May 16 09:42:09.430742 containerd[1522]: time="2025-05-16T09:42:09.430714720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 09:42:09.432232 
containerd[1522]: time="2025-05-16T09:42:09.432197720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 09:42:09.539678 containerd[1522]: time="2025-05-16T09:42:09.539617360Z" level=info msg="Start subscribing containerd event" May 16 09:42:09.539806 containerd[1522]: time="2025-05-16T09:42:09.539731560Z" level=info msg="Start recovering state" May 16 09:42:09.539932 containerd[1522]: time="2025-05-16T09:42:09.539851320Z" level=info msg="Start event monitor" May 16 09:42:09.539955 containerd[1522]: time="2025-05-16T09:42:09.539935440Z" level=info msg="Start cni network conf syncer for default" May 16 09:42:09.539955 containerd[1522]: time="2025-05-16T09:42:09.539945680Z" level=info msg="Start streaming server" May 16 09:42:09.539988 containerd[1522]: time="2025-05-16T09:42:09.539956280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 09:42:09.539988 containerd[1522]: time="2025-05-16T09:42:09.539964320Z" level=info msg="runtime interface starting up..." May 16 09:42:09.539988 containerd[1522]: time="2025-05-16T09:42:09.539970200Z" level=info msg="starting plugins..." May 16 09:42:09.540046 containerd[1522]: time="2025-05-16T09:42:09.539998440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 09:42:09.540179 containerd[1522]: time="2025-05-16T09:42:09.540093600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 09:42:09.540226 containerd[1522]: time="2025-05-16T09:42:09.540210840Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 09:42:09.544024 systemd[1]: Started containerd.service - containerd container runtime. 
May 16 09:42:09.545463 containerd[1522]: time="2025-05-16T09:42:09.545416200Z" level=info msg="containerd successfully booted in 0.135942s" May 16 09:42:09.652493 tar[1521]: linux-arm64/README.md May 16 09:42:09.666636 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 09:42:10.503332 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 09:42:10.522144 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 09:42:10.525181 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 09:42:10.542004 systemd[1]: issuegen.service: Deactivated successfully. May 16 09:42:10.542824 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 09:42:10.545408 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 09:42:10.567838 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 09:42:10.570571 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 09:42:10.574004 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 16 09:42:10.575376 systemd[1]: Reached target getty.target - Login Prompts. May 16 09:42:10.773919 systemd-networkd[1464]: eth0: Gained IPv6LL May 16 09:42:10.777797 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 09:42:10.779673 systemd[1]: Reached target network-online.target - Network is Online. May 16 09:42:10.782274 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 09:42:10.784613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:42:10.801252 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 09:42:10.814871 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 09:42:10.815087 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 16 09:42:10.816709 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 09:42:10.822456 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 09:42:11.312597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:11.314384 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 09:42:11.317002 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 09:42:11.318848 systemd[1]: Startup finished in 2.081s (kernel) + 7.820s (initrd) + 3.922s (userspace) = 13.825s. May 16 09:42:11.703175 kubelet[1633]: E0516 09:42:11.703055 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 09:42:11.705164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 09:42:11.705302 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 09:42:11.705671 systemd[1]: kubelet.service: Consumed 755ms CPU time, 246.8M memory peak. May 16 09:42:12.106642 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 09:42:12.107871 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:34380.service - OpenSSH per-connection server daemon (10.0.0.1:34380). May 16 09:42:12.185951 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 34380 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:12.188028 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:12.196925 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 16 09:42:12.197863 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 09:42:12.204607 systemd-logind[1512]: New session 1 of user core. May 16 09:42:12.217988 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 09:42:12.220180 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 09:42:12.239250 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 09:42:12.241138 systemd-logind[1512]: New session c1 of user core. May 16 09:42:12.342665 systemd[1651]: Queued start job for default target default.target. May 16 09:42:12.366556 systemd[1651]: Created slice app.slice - User Application Slice. May 16 09:42:12.366582 systemd[1651]: Reached target paths.target - Paths. May 16 09:42:12.366613 systemd[1651]: Reached target timers.target - Timers. May 16 09:42:12.367702 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 09:42:12.375404 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 09:42:12.375457 systemd[1651]: Reached target sockets.target - Sockets. May 16 09:42:12.375490 systemd[1651]: Reached target basic.target - Basic System. May 16 09:42:12.375517 systemd[1651]: Reached target default.target - Main User Target. May 16 09:42:12.375540 systemd[1651]: Startup finished in 129ms. May 16 09:42:12.375697 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 09:42:12.376898 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 09:42:12.435304 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:34392.service - OpenSSH per-connection server daemon (10.0.0.1:34392). 
May 16 09:42:12.482886 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 34392 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:12.484070 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:12.487713 systemd-logind[1512]: New session 2 of user core. May 16 09:42:12.504958 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 09:42:12.555282 sshd[1664]: Connection closed by 10.0.0.1 port 34392 May 16 09:42:12.554825 sshd-session[1662]: pam_unix(sshd:session): session closed for user core May 16 09:42:12.564560 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:34392.service: Deactivated successfully. May 16 09:42:12.566886 systemd[1]: session-2.scope: Deactivated successfully. May 16 09:42:12.567457 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit. May 16 09:42:12.569340 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:38506.service - OpenSSH per-connection server daemon (10.0.0.1:38506). May 16 09:42:12.570224 systemd-logind[1512]: Removed session 2. May 16 09:42:12.618092 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 38506 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:12.619037 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:12.623339 systemd-logind[1512]: New session 3 of user core. May 16 09:42:12.631895 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 09:42:12.679084 sshd[1672]: Connection closed by 10.0.0.1 port 38506 May 16 09:42:12.679348 sshd-session[1670]: pam_unix(sshd:session): session closed for user core May 16 09:42:12.697577 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:38506.service: Deactivated successfully. May 16 09:42:12.700167 systemd[1]: session-3.scope: Deactivated successfully. May 16 09:42:12.700871 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit. 
May 16 09:42:12.703964 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:38520.service - OpenSSH per-connection server daemon (10.0.0.1:38520). May 16 09:42:12.705064 systemd-logind[1512]: Removed session 3. May 16 09:42:12.754651 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 38520 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:12.755866 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:12.760227 systemd-logind[1512]: New session 4 of user core. May 16 09:42:12.765884 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 09:42:12.817339 sshd[1680]: Connection closed by 10.0.0.1 port 38520 May 16 09:42:12.817613 sshd-session[1678]: pam_unix(sshd:session): session closed for user core May 16 09:42:12.827714 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:38520.service: Deactivated successfully. May 16 09:42:12.829987 systemd[1]: session-4.scope: Deactivated successfully. May 16 09:42:12.830795 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit. May 16 09:42:12.833394 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:38524.service - OpenSSH per-connection server daemon (10.0.0.1:38524). May 16 09:42:12.834204 systemd-logind[1512]: Removed session 4. May 16 09:42:12.892962 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:12.894429 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:12.898677 systemd-logind[1512]: New session 5 of user core. May 16 09:42:12.908932 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 16 09:42:12.973700 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 09:42:12.975736 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 09:42:12.992219 sudo[1689]: pam_unix(sudo:session): session closed for user root May 16 09:42:12.994355 sshd[1688]: Connection closed by 10.0.0.1 port 38524 May 16 09:42:12.994170 sshd-session[1686]: pam_unix(sshd:session): session closed for user core May 16 09:42:13.007597 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:38524.service: Deactivated successfully. May 16 09:42:13.008981 systemd[1]: session-5.scope: Deactivated successfully. May 16 09:42:13.009585 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit. May 16 09:42:13.011680 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:38538.service - OpenSSH per-connection server daemon (10.0.0.1:38538). May 16 09:42:13.012848 systemd-logind[1512]: Removed session 5. May 16 09:42:13.067852 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 38538 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:13.069017 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:13.073139 systemd-logind[1512]: New session 6 of user core. May 16 09:42:13.080953 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 16 09:42:13.132229 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 09:42:13.132860 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 09:42:13.140111 sudo[1699]: pam_unix(sudo:session): session closed for user root May 16 09:42:13.144987 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 09:42:13.145244 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 09:42:13.153550 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 09:42:13.186275 augenrules[1721]: No rules May 16 09:42:13.187367 systemd[1]: audit-rules.service: Deactivated successfully. May 16 09:42:13.187599 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 09:42:13.188566 sudo[1698]: pam_unix(sudo:session): session closed for user root May 16 09:42:13.189989 sshd[1697]: Connection closed by 10.0.0.1 port 38538 May 16 09:42:13.190453 sshd-session[1695]: pam_unix(sshd:session): session closed for user core May 16 09:42:13.204626 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:38538.service: Deactivated successfully. May 16 09:42:13.206308 systemd[1]: session-6.scope: Deactivated successfully. May 16 09:42:13.207058 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit. May 16 09:42:13.209417 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon (10.0.0.1:38546). May 16 09:42:13.209841 systemd-logind[1512]: Removed session 6. May 16 09:42:13.269149 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:42:13.269578 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:42:13.275657 systemd-logind[1512]: New session 7 of user core. 
May 16 09:42:13.280880 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 09:42:13.332232 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 09:42:13.332507 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 09:42:13.732467 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 09:42:13.747087 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 09:42:14.004355 dockerd[1754]: time="2025-05-16T09:42:14.004249015Z" level=info msg="Starting up" May 16 09:42:14.005500 dockerd[1754]: time="2025-05-16T09:42:14.005467514Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 09:42:14.207784 dockerd[1754]: time="2025-05-16T09:42:14.207578763Z" level=info msg="Loading containers: start." May 16 09:42:14.218762 kernel: Initializing XFRM netlink socket May 16 09:42:14.395461 systemd-networkd[1464]: docker0: Link UP May 16 09:42:14.399983 dockerd[1754]: time="2025-05-16T09:42:14.399942506Z" level=info msg="Loading containers: done." 
May 16 09:42:14.415217 dockerd[1754]: time="2025-05-16T09:42:14.414880982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 09:42:14.415217 dockerd[1754]: time="2025-05-16T09:42:14.414966980Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 16 09:42:14.415217 dockerd[1754]: time="2025-05-16T09:42:14.415066227Z" level=info msg="Initializing buildkit" May 16 09:42:14.434015 dockerd[1754]: time="2025-05-16T09:42:14.433985673Z" level=info msg="Completed buildkit initialization" May 16 09:42:14.441500 dockerd[1754]: time="2025-05-16T09:42:14.441471213Z" level=info msg="Daemon has completed initialization" May 16 09:42:14.441663 dockerd[1754]: time="2025-05-16T09:42:14.441633552Z" level=info msg="API listen on /run/docker.sock" May 16 09:42:14.441701 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 09:42:15.188066 containerd[1522]: time="2025-05-16T09:42:15.188019517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 09:42:15.853082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140797712.mount: Deactivated successfully. 
May 16 09:42:17.159764 containerd[1522]: time="2025-05-16T09:42:17.159717416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:17.160406 containerd[1522]: time="2025-05-16T09:42:17.160362289Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313" May 16 09:42:17.160963 containerd[1522]: time="2025-05-16T09:42:17.160940291Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:17.163601 containerd[1522]: time="2025-05-16T09:42:17.163572160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:17.164914 containerd[1522]: time="2025-05-16T09:42:17.164881707Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.976825389s" May 16 09:42:17.164963 containerd[1522]: time="2025-05-16T09:42:17.164915721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 16 09:42:17.165533 containerd[1522]: time="2025-05-16T09:42:17.165463821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 09:42:18.563449 containerd[1522]: time="2025-05-16T09:42:18.563401780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:18.564501 containerd[1522]: time="2025-05-16T09:42:18.564475034Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549" May 16 09:42:18.565477 containerd[1522]: time="2025-05-16T09:42:18.565434694Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:18.568548 containerd[1522]: time="2025-05-16T09:42:18.568511715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:18.569441 containerd[1522]: time="2025-05-16T09:42:18.569411843Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.403921632s" May 16 09:42:18.569476 containerd[1522]: time="2025-05-16T09:42:18.569444424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 16 09:42:18.570010 containerd[1522]: time="2025-05-16T09:42:18.569835235Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 09:42:19.991699 containerd[1522]: time="2025-05-16T09:42:19.991636644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:19.992189 containerd[1522]: time="2025-05-16T09:42:19.992156281Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192" May 16 09:42:19.992846 containerd[1522]: time="2025-05-16T09:42:19.992798869Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:19.995057 containerd[1522]: time="2025-05-16T09:42:19.995032033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:19.996023 containerd[1522]: time="2025-05-16T09:42:19.995964967Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.426104896s" May 16 09:42:19.996023 containerd[1522]: time="2025-05-16T09:42:19.995993958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 16 09:42:19.996555 containerd[1522]: time="2025-05-16T09:42:19.996523937Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 09:42:21.069587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530632971.mount: Deactivated successfully. 
May 16 09:42:21.441692 containerd[1522]: time="2025-05-16T09:42:21.441570450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:21.442851 containerd[1522]: time="2025-05-16T09:42:21.442797897Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377" May 16 09:42:21.443715 containerd[1522]: time="2025-05-16T09:42:21.443668437Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:21.445684 containerd[1522]: time="2025-05-16T09:42:21.445641490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:21.446154 containerd[1522]: time="2025-05-16T09:42:21.446012176Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.449453856s" May 16 09:42:21.446154 containerd[1522]: time="2025-05-16T09:42:21.446039535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 16 09:42:21.446465 containerd[1522]: time="2025-05-16T09:42:21.446438380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 09:42:21.956292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 09:42:21.957667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 16 09:42:22.095240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:22.099030 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 09:42:22.125239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763226241.mount: Deactivated successfully. May 16 09:42:22.143301 kubelet[2044]: E0516 09:42:22.143254 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 09:42:22.147099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 09:42:22.147327 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 09:42:22.147662 systemd[1]: kubelet.service: Consumed 138ms CPU time, 102.8M memory peak. 
May 16 09:42:23.132099 containerd[1522]: time="2025-05-16T09:42:23.132041873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:23.132588 containerd[1522]: time="2025-05-16T09:42:23.132550205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 16 09:42:23.135798 containerd[1522]: time="2025-05-16T09:42:23.135732309Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:23.138617 containerd[1522]: time="2025-05-16T09:42:23.138580611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:23.140294 containerd[1522]: time="2025-05-16T09:42:23.140257130Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.693792149s" May 16 09:42:23.140368 containerd[1522]: time="2025-05-16T09:42:23.140312258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 09:42:23.140706 containerd[1522]: time="2025-05-16T09:42:23.140688124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 09:42:23.613727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564601875.mount: Deactivated successfully. 
May 16 09:42:23.618282 containerd[1522]: time="2025-05-16T09:42:23.618232316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 09:42:23.618703 containerd[1522]: time="2025-05-16T09:42:23.618661273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 16 09:42:23.619456 containerd[1522]: time="2025-05-16T09:42:23.619419757Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 09:42:23.621050 containerd[1522]: time="2025-05-16T09:42:23.621012986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 09:42:23.621629 containerd[1522]: time="2025-05-16T09:42:23.621593864Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 480.800477ms" May 16 09:42:23.621629 containerd[1522]: time="2025-05-16T09:42:23.621624503Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 09:42:23.622361 containerd[1522]: time="2025-05-16T09:42:23.622329258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 09:42:24.181574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568350267.mount: 
Deactivated successfully. May 16 09:42:26.899326 containerd[1522]: time="2025-05-16T09:42:26.899270236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:26.899812 containerd[1522]: time="2025-05-16T09:42:26.899775449Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 16 09:42:26.900911 containerd[1522]: time="2025-05-16T09:42:26.900862954Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:26.904243 containerd[1522]: time="2025-05-16T09:42:26.904201664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:26.905622 containerd[1522]: time="2025-05-16T09:42:26.905576497Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.28321536s" May 16 09:42:26.905660 containerd[1522]: time="2025-05-16T09:42:26.905622567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 16 09:42:31.777209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:31.777351 systemd[1]: kubelet.service: Consumed 138ms CPU time, 102.8M memory peak. May 16 09:42:31.779148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 16 09:42:31.798290 systemd[1]: Reload requested from client PID 2190 ('systemctl') (unit session-7.scope)... May 16 09:42:31.798306 systemd[1]: Reloading... May 16 09:42:31.860387 zram_generator::config[2234]: No configuration found. May 16 09:42:31.953048 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 09:42:32.035457 systemd[1]: Reloading finished in 236 ms. May 16 09:42:32.091181 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 09:42:32.091261 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 09:42:32.091481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:32.091524 systemd[1]: kubelet.service: Consumed 82ms CPU time, 90.2M memory peak. May 16 09:42:32.094984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:42:32.216565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:32.221015 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 09:42:32.257270 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 09:42:32.257270 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 09:42:32.257270 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 09:42:32.257604 kubelet[2279]: I0516 09:42:32.257327 2279 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 09:42:33.491800 kubelet[2279]: I0516 09:42:33.491760 2279 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 16 09:42:33.491800 kubelet[2279]: I0516 09:42:33.491792 2279 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 09:42:33.492180 kubelet[2279]: I0516 09:42:33.492071 2279 server.go:954] "Client rotation is on, will bootstrap in background" May 16 09:42:33.530054 kubelet[2279]: I0516 09:42:33.530012 2279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 09:42:33.530794 kubelet[2279]: E0516 09:42:33.530737 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" May 16 09:42:33.538551 kubelet[2279]: I0516 09:42:33.538459 2279 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 09:42:33.542155 kubelet[2279]: I0516 09:42:33.542089 2279 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 09:42:33.542365 kubelet[2279]: I0516 09:42:33.542320 2279 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 09:42:33.542524 kubelet[2279]: I0516 09:42:33.542352 2279 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 09:42:33.542610 kubelet[2279]: I0516 09:42:33.542584 2279 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 09:42:33.542610 kubelet[2279]: I0516 09:42:33.542593 2279 container_manager_linux.go:304] "Creating device plugin manager" May 16 09:42:33.542828 kubelet[2279]: I0516 09:42:33.542794 2279 state_mem.go:36] "Initialized new in-memory state store" May 16 09:42:33.546990 kubelet[2279]: I0516 09:42:33.546952 2279 kubelet.go:446] "Attempting to sync node with API server" May 16 09:42:33.546990 kubelet[2279]: I0516 09:42:33.546985 2279 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 09:42:33.548089 kubelet[2279]: I0516 09:42:33.548061 2279 kubelet.go:352] "Adding apiserver pod source" May 16 09:42:33.548195 kubelet[2279]: I0516 09:42:33.548096 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 09:42:33.551561 kubelet[2279]: I0516 09:42:33.551431 2279 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 09:42:33.551561 kubelet[2279]: W0516 09:42:33.551467 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused May 16 09:42:33.551561 kubelet[2279]: E0516 09:42:33.551518 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" May 16 09:42:33.551836 kubelet[2279]: W0516 09:42:33.551784 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused May 16 
09:42:33.551836 kubelet[2279]: E0516 09:42:33.551831 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" May 16 09:42:33.553206 kubelet[2279]: I0516 09:42:33.553186 2279 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 09:42:33.553408 kubelet[2279]: W0516 09:42:33.553396 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 09:42:33.554375 kubelet[2279]: I0516 09:42:33.554346 2279 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 09:42:33.554449 kubelet[2279]: I0516 09:42:33.554389 2279 server.go:1287] "Started kubelet" May 16 09:42:33.555907 kubelet[2279]: I0516 09:42:33.555868 2279 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 09:42:33.562287 kubelet[2279]: I0516 09:42:33.559669 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 09:42:33.562287 kubelet[2279]: I0516 09:42:33.559934 2279 server.go:490] "Adding debug handlers to kubelet server" May 16 09:42:33.562287 kubelet[2279]: I0516 09:42:33.559995 2279 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 09:42:33.562287 kubelet[2279]: I0516 09:42:33.560339 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 09:42:33.563439 kubelet[2279]: I0516 09:42:33.563414 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 09:42:33.565205 kubelet[2279]: E0516 09:42:33.565174 2279 kubelet_node_status.go:467] "Error 
getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:33.565310 kubelet[2279]: I0516 09:42:33.565298 2279 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 09:42:33.565445 kubelet[2279]: E0516 09:42:33.565404 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" May 16 09:42:33.565670 kubelet[2279]: I0516 09:42:33.565648 2279 reconciler.go:26] "Reconciler: start to sync state" May 16 09:42:33.565670 kubelet[2279]: E0516 09:42:33.565419 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ff8a1a3310e30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 09:42:33.554366 +0000 UTC m=+1.330001719,LastTimestamp:2025-05-16 09:42:33.554366 +0000 UTC m=+1.330001719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 09:42:33.565797 kubelet[2279]: I0516 09:42:33.565680 2279 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 16 09:42:33.565846 kubelet[2279]: E0516 09:42:33.565823 2279 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 09:42:33.566236 kubelet[2279]: I0516 09:42:33.566207 2279 factory.go:221] Registration of the systemd container factory successfully May 16 09:42:33.566339 kubelet[2279]: I0516 09:42:33.566317 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 09:42:33.566777 kubelet[2279]: W0516 09:42:33.566708 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused May 16 09:42:33.566895 kubelet[2279]: E0516 09:42:33.566866 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" May 16 09:42:33.567131 kubelet[2279]: I0516 09:42:33.567112 2279 factory.go:221] Registration of the containerd container factory successfully May 16 09:42:33.574928 kubelet[2279]: I0516 09:42:33.574891 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 09:42:33.576004 kubelet[2279]: I0516 09:42:33.575963 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 09:42:33.576004 kubelet[2279]: I0516 09:42:33.575991 2279 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 09:42:33.576095 kubelet[2279]: I0516 09:42:33.576014 2279 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 09:42:33.576095 kubelet[2279]: I0516 09:42:33.576021 2279 kubelet.go:2388] "Starting kubelet main sync loop" May 16 09:42:33.576095 kubelet[2279]: E0516 09:42:33.576063 2279 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 09:42:33.580788 kubelet[2279]: W0516 09:42:33.580725 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused May 16 09:42:33.580845 kubelet[2279]: E0516 09:42:33.580796 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" May 16 09:42:33.581517 kubelet[2279]: I0516 09:42:33.581489 2279 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 09:42:33.581517 kubelet[2279]: I0516 09:42:33.581503 2279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 09:42:33.581517 kubelet[2279]: I0516 09:42:33.581520 2279 state_mem.go:36] "Initialized new in-memory state store" May 16 09:42:33.666033 kubelet[2279]: E0516 09:42:33.665991 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:33.676153 kubelet[2279]: E0516 09:42:33.676115 2279 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 09:42:33.698439 kubelet[2279]: I0516 09:42:33.698407 2279 policy_none.go:49] "None policy: Start" May 16 09:42:33.698439 kubelet[2279]: I0516 09:42:33.698437 2279 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 
09:42:33.698516 kubelet[2279]: I0516 09:42:33.698449 2279 state_mem.go:35] "Initializing new in-memory state store" May 16 09:42:33.703322 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 09:42:33.715407 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 09:42:33.718175 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 09:42:33.738642 kubelet[2279]: I0516 09:42:33.738606 2279 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 09:42:33.738869 kubelet[2279]: I0516 09:42:33.738842 2279 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 09:42:33.738958 kubelet[2279]: I0516 09:42:33.738860 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 09:42:33.739156 kubelet[2279]: I0516 09:42:33.739075 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 09:42:33.740066 kubelet[2279]: E0516 09:42:33.740045 2279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 09:42:33.740156 kubelet[2279]: E0516 09:42:33.740144 2279 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 09:42:33.766826 kubelet[2279]: E0516 09:42:33.766718 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" May 16 09:42:33.840809 kubelet[2279]: I0516 09:42:33.840761 2279 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 09:42:33.841226 kubelet[2279]: E0516 09:42:33.841187 2279 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" May 16 09:42:33.884722 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 09:42:33.908074 kubelet[2279]: E0516 09:42:33.908043 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:33.912398 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 16 09:42:33.914178 kubelet[2279]: E0516 09:42:33.914142 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:33.916437 systemd[1]: Created slice kubepods-burstable-poda2a1cc28c54e9de309f9835eabbb0043.slice - libcontainer container kubepods-burstable-poda2a1cc28c54e9de309f9835eabbb0043.slice. 
May 16 09:42:33.918094 kubelet[2279]: E0516 09:42:33.918071 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:33.967539 kubelet[2279]: I0516 09:42:33.967497 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2a1cc28c54e9de309f9835eabbb0043-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2a1cc28c54e9de309f9835eabbb0043\") " pod="kube-system/kube-apiserver-localhost" May 16 09:42:33.967611 kubelet[2279]: I0516 09:42:33.967540 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:33.967611 kubelet[2279]: I0516 09:42:33.967575 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:33.967611 kubelet[2279]: I0516 09:42:33.967602 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 09:42:33.967705 kubelet[2279]: I0516 09:42:33.967618 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a2a1cc28c54e9de309f9835eabbb0043-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2a1cc28c54e9de309f9835eabbb0043\") " pod="kube-system/kube-apiserver-localhost" May 16 09:42:33.967705 kubelet[2279]: I0516 09:42:33.967634 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2a1cc28c54e9de309f9835eabbb0043-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a2a1cc28c54e9de309f9835eabbb0043\") " pod="kube-system/kube-apiserver-localhost" May 16 09:42:33.967705 kubelet[2279]: I0516 09:42:33.967648 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:33.967705 kubelet[2279]: I0516 09:42:33.967663 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:33.967705 kubelet[2279]: I0516 09:42:33.967678 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:34.042866 kubelet[2279]: I0516 09:42:34.042781 2279 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 09:42:34.043161 
kubelet[2279]: E0516 09:42:34.043127 2279 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" May 16 09:42:34.167548 kubelet[2279]: E0516 09:42:34.167506 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" May 16 09:42:34.208895 kubelet[2279]: E0516 09:42:34.208857 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.209495 containerd[1522]: time="2025-05-16T09:42:34.209460092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 09:42:34.214769 kubelet[2279]: E0516 09:42:34.214711 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.215304 containerd[1522]: time="2025-05-16T09:42:34.215269585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 09:42:34.218640 kubelet[2279]: E0516 09:42:34.218559 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.219080 containerd[1522]: time="2025-05-16T09:42:34.219037675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a2a1cc28c54e9de309f9835eabbb0043,Namespace:kube-system,Attempt:0,}" May 16 
09:42:34.231762 containerd[1522]: time="2025-05-16T09:42:34.231694033Z" level=info msg="connecting to shim 60fd093a72791ab2a70e80ce657d5bd33a3fcc9d1c2c007a2b94920524d5b32b" address="unix:///run/containerd/s/72296b190a4cf94a3b71232fdca5f3307dcb86e612af1c058c184693cc9b038c" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:34.246198 containerd[1522]: time="2025-05-16T09:42:34.246112175Z" level=info msg="connecting to shim 57006e433c6841c3874ea36bfcb1468ad746028740d8a8c58f2931d0dafc4eb0" address="unix:///run/containerd/s/678c5d4e4cf551d1fcd722d607c74c38adbac2798b558f6b410d12b87f55bae4" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:34.255276 containerd[1522]: time="2025-05-16T09:42:34.255224019Z" level=info msg="connecting to shim 5c4d5b429b676848a1003cc7846efb836983a1b6bc70f12bf4b815913f515a91" address="unix:///run/containerd/s/1a53d694e826fee870b57179b53398c1c1daa64d4fee4147e9053401f523813f" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:34.268947 systemd[1]: Started cri-containerd-60fd093a72791ab2a70e80ce657d5bd33a3fcc9d1c2c007a2b94920524d5b32b.scope - libcontainer container 60fd093a72791ab2a70e80ce657d5bd33a3fcc9d1c2c007a2b94920524d5b32b. May 16 09:42:34.272733 systemd[1]: Started cri-containerd-57006e433c6841c3874ea36bfcb1468ad746028740d8a8c58f2931d0dafc4eb0.scope - libcontainer container 57006e433c6841c3874ea36bfcb1468ad746028740d8a8c58f2931d0dafc4eb0. May 16 09:42:34.280954 systemd[1]: Started cri-containerd-5c4d5b429b676848a1003cc7846efb836983a1b6bc70f12bf4b815913f515a91.scope - libcontainer container 5c4d5b429b676848a1003cc7846efb836983a1b6bc70f12bf4b815913f515a91. 
May 16 09:42:34.312200 containerd[1522]: time="2025-05-16T09:42:34.311524591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"60fd093a72791ab2a70e80ce657d5bd33a3fcc9d1c2c007a2b94920524d5b32b\"" May 16 09:42:34.313989 kubelet[2279]: E0516 09:42:34.313957 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.317164 containerd[1522]: time="2025-05-16T09:42:34.316802467Z" level=info msg="CreateContainer within sandbox \"60fd093a72791ab2a70e80ce657d5bd33a3fcc9d1c2c007a2b94920524d5b32b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 09:42:34.320176 containerd[1522]: time="2025-05-16T09:42:34.320130961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"57006e433c6841c3874ea36bfcb1468ad746028740d8a8c58f2931d0dafc4eb0\"" May 16 09:42:34.320954 kubelet[2279]: E0516 09:42:34.320762 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.324039 containerd[1522]: time="2025-05-16T09:42:34.323992150Z" level=info msg="CreateContainer within sandbox \"57006e433c6841c3874ea36bfcb1468ad746028740d8a8c58f2931d0dafc4eb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 09:42:34.324982 containerd[1522]: time="2025-05-16T09:42:34.324957768Z" level=info msg="Container f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187: CDI devices from CRI Config.CDIDevices: []" May 16 09:42:34.333687 containerd[1522]: time="2025-05-16T09:42:34.333646085Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a2a1cc28c54e9de309f9835eabbb0043,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c4d5b429b676848a1003cc7846efb836983a1b6bc70f12bf4b815913f515a91\"" May 16 09:42:34.334403 kubelet[2279]: E0516 09:42:34.334378 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.336233 containerd[1522]: time="2025-05-16T09:42:34.336164980Z" level=info msg="CreateContainer within sandbox \"5c4d5b429b676848a1003cc7846efb836983a1b6bc70f12bf4b815913f515a91\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 09:42:34.337376 containerd[1522]: time="2025-05-16T09:42:34.337340662Z" level=info msg="Container a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774: CDI devices from CRI Config.CDIDevices: []" May 16 09:42:34.337847 containerd[1522]: time="2025-05-16T09:42:34.337818874Z" level=info msg="CreateContainer within sandbox \"60fd093a72791ab2a70e80ce657d5bd33a3fcc9d1c2c007a2b94920524d5b32b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187\"" May 16 09:42:34.338437 containerd[1522]: time="2025-05-16T09:42:34.338409573Z" level=info msg="StartContainer for \"f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187\"" May 16 09:42:34.339473 containerd[1522]: time="2025-05-16T09:42:34.339449022Z" level=info msg="connecting to shim f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187" address="unix:///run/containerd/s/72296b190a4cf94a3b71232fdca5f3307dcb86e612af1c058c184693cc9b038c" protocol=ttrpc version=3 May 16 09:42:34.342766 containerd[1522]: time="2025-05-16T09:42:34.342712198Z" level=info msg="CreateContainer within sandbox \"57006e433c6841c3874ea36bfcb1468ad746028740d8a8c58f2931d0dafc4eb0\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774\"" May 16 09:42:34.343221 containerd[1522]: time="2025-05-16T09:42:34.343195366Z" level=info msg="StartContainer for \"a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774\"" May 16 09:42:34.344898 containerd[1522]: time="2025-05-16T09:42:34.344871046Z" level=info msg="connecting to shim a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774" address="unix:///run/containerd/s/678c5d4e4cf551d1fcd722d607c74c38adbac2798b558f6b410d12b87f55bae4" protocol=ttrpc version=3 May 16 09:42:34.345528 containerd[1522]: time="2025-05-16T09:42:34.345488408Z" level=info msg="Container a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881: CDI devices from CRI Config.CDIDevices: []" May 16 09:42:34.351471 containerd[1522]: time="2025-05-16T09:42:34.351429656Z" level=info msg="CreateContainer within sandbox \"5c4d5b429b676848a1003cc7846efb836983a1b6bc70f12bf4b815913f515a91\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881\"" May 16 09:42:34.352119 containerd[1522]: time="2025-05-16T09:42:34.352094028Z" level=info msg="StartContainer for \"a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881\"" May 16 09:42:34.353151 containerd[1522]: time="2025-05-16T09:42:34.353120766Z" level=info msg="connecting to shim a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881" address="unix:///run/containerd/s/1a53d694e826fee870b57179b53398c1c1daa64d4fee4147e9053401f523813f" protocol=ttrpc version=3 May 16 09:42:34.358949 systemd[1]: Started cri-containerd-f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187.scope - libcontainer container f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187. 
May 16 09:42:34.362594 systemd[1]: Started cri-containerd-a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774.scope - libcontainer container a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774. May 16 09:42:34.380959 systemd[1]: Started cri-containerd-a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881.scope - libcontainer container a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881. May 16 09:42:34.412514 kubelet[2279]: W0516 09:42:34.412459 2279 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused May 16 09:42:34.412607 kubelet[2279]: E0516 09:42:34.412520 2279 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" May 16 09:42:34.423050 kubelet[2279]: E0516 09:42:34.419665 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ff8a1a3310e30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 09:42:33.554366 +0000 UTC m=+1.330001719,LastTimestamp:2025-05-16 09:42:33.554366 +0000 UTC m=+1.330001719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 09:42:34.451899 kubelet[2279]: I0516 09:42:34.450296 2279 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 09:42:34.451899 kubelet[2279]: E0516 09:42:34.450677 2279 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" May 16 09:42:34.482053 containerd[1522]: time="2025-05-16T09:42:34.481728347Z" level=info msg="StartContainer for \"a522b993b66fb349ffe011f55869a457917cbbd31090a4f9e227f5bc01405881\" returns successfully" May 16 09:42:34.482053 containerd[1522]: time="2025-05-16T09:42:34.481916866Z" level=info msg="StartContainer for \"f82d26e7da7d60b7a28b55005d6bee4cb73d822ccce32f5987f54ff7b06d9187\" returns successfully" May 16 09:42:34.483474 containerd[1522]: time="2025-05-16T09:42:34.483438564Z" level=info msg="StartContainer for \"a0d80af2c109b1ce210868b11b6040b06a7054d7d41f4fc3137b536cf2204774\" returns successfully" May 16 09:42:34.590613 kubelet[2279]: E0516 09:42:34.590428 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:34.590913 kubelet[2279]: E0516 09:42:34.590718 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:34.594558 kubelet[2279]: E0516 09:42:34.594529 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:34.594655 kubelet[2279]: E0516 09:42:34.594639 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 16 09:42:34.598103 kubelet[2279]: E0516 09:42:34.598070 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:34.598204 kubelet[2279]: E0516 09:42:34.598168 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:35.252703 kubelet[2279]: I0516 09:42:35.251985 2279 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 09:42:35.601033 kubelet[2279]: E0516 09:42:35.600930 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:35.601303 kubelet[2279]: E0516 09:42:35.601063 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:35.601428 kubelet[2279]: E0516 09:42:35.601412 2279 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 09:42:35.601592 kubelet[2279]: E0516 09:42:35.601577 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:35.797274 kubelet[2279]: E0516 09:42:35.797216 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 09:42:35.841257 kubelet[2279]: I0516 09:42:35.841059 2279 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 16 09:42:35.841257 kubelet[2279]: E0516 09:42:35.841096 2279 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting 
node \"localhost\": node \"localhost\" not found" May 16 09:42:35.847767 kubelet[2279]: E0516 09:42:35.847725 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:35.948162 kubelet[2279]: E0516 09:42:35.948052 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:36.049218 kubelet[2279]: E0516 09:42:36.049172 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:36.150107 kubelet[2279]: E0516 09:42:36.150067 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:36.250712 kubelet[2279]: E0516 09:42:36.250681 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:36.351379 kubelet[2279]: E0516 09:42:36.351331 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:36.451936 kubelet[2279]: E0516 09:42:36.451902 2279 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:36.552492 kubelet[2279]: I0516 09:42:36.552261 2279 apiserver.go:52] "Watching apiserver" May 16 09:42:36.565117 kubelet[2279]: I0516 09:42:36.565084 2279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 09:42:36.566398 kubelet[2279]: I0516 09:42:36.566342 2279 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 16 09:42:36.570798 kubelet[2279]: E0516 09:42:36.570763 2279 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 09:42:36.570798 kubelet[2279]: I0516 
09:42:36.570791 2279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 09:42:36.572536 kubelet[2279]: E0516 09:42:36.572335 2279 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 09:42:36.572536 kubelet[2279]: I0516 09:42:36.572359 2279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 09:42:36.573966 kubelet[2279]: E0516 09:42:36.573937 2279 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 09:42:37.656711 kubelet[2279]: I0516 09:42:37.656676 2279 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 09:42:37.661768 kubelet[2279]: E0516 09:42:37.661721 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:37.696415 systemd[1]: Reload requested from client PID 2555 ('systemctl') (unit session-7.scope)... May 16 09:42:37.696430 systemd[1]: Reloading... May 16 09:42:37.762776 zram_generator::config[2598]: No configuration found. May 16 09:42:37.832274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 09:42:37.927217 systemd[1]: Reloading finished in 230 ms. May 16 09:42:37.946827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:42:37.959683 systemd[1]: kubelet.service: Deactivated successfully. 
May 16 09:42:37.960841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:37.960903 systemd[1]: kubelet.service: Consumed 1.698s CPU time, 123.5M memory peak. May 16 09:42:37.962634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 09:42:38.113871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 09:42:38.129064 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 09:42:38.170559 kubelet[2640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 09:42:38.171805 kubelet[2640]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 09:42:38.171805 kubelet[2640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 09:42:38.171805 kubelet[2640]: I0516 09:42:38.171024 2640 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 09:42:38.176203 kubelet[2640]: I0516 09:42:38.176161 2640 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 16 09:42:38.176203 kubelet[2640]: I0516 09:42:38.176191 2640 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 09:42:38.176464 kubelet[2640]: I0516 09:42:38.176438 2640 server.go:954] "Client rotation is on, will bootstrap in background" May 16 09:42:38.177713 kubelet[2640]: I0516 09:42:38.177646 2640 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 09:42:38.180332 kubelet[2640]: I0516 09:42:38.180306 2640 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 09:42:38.185293 kubelet[2640]: I0516 09:42:38.185254 2640 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 09:42:38.187835 kubelet[2640]: I0516 09:42:38.187813 2640 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 09:42:38.188024 kubelet[2640]: I0516 09:42:38.187992 2640 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 09:42:38.188185 kubelet[2640]: I0516 09:42:38.188019 2640 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 09:42:38.188264 kubelet[2640]: I0516 09:42:38.188189 2640 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 09:42:38.188264 kubelet[2640]: I0516 09:42:38.188198 2640 container_manager_linux.go:304] "Creating device plugin manager" May 16 09:42:38.188264 kubelet[2640]: I0516 09:42:38.188240 2640 state_mem.go:36] "Initialized new in-memory state store" May 16 09:42:38.188374 kubelet[2640]: I0516 09:42:38.188361 2640 kubelet.go:446] "Attempting to sync node with API server" May 16 09:42:38.188398 kubelet[2640]: I0516 09:42:38.188375 2640 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 09:42:38.188398 kubelet[2640]: I0516 09:42:38.188396 2640 kubelet.go:352] "Adding apiserver pod source" May 16 09:42:38.188442 kubelet[2640]: I0516 09:42:38.188406 2640 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 09:42:38.189425 kubelet[2640]: I0516 09:42:38.188945 2640 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 09:42:38.189425 kubelet[2640]: I0516 09:42:38.189369 2640 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 09:42:38.189857 kubelet[2640]: I0516 09:42:38.189822 2640 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 09:42:38.189990 kubelet[2640]: I0516 09:42:38.189973 2640 server.go:1287] "Started kubelet" May 16 09:42:38.190393 kubelet[2640]: I0516 09:42:38.190228 2640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 09:42:38.190524 kubelet[2640]: I0516 09:42:38.190503 2640 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 09:42:38.190582 kubelet[2640]: I0516 09:42:38.190562 2640 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 09:42:38.191443 kubelet[2640]: I0516 09:42:38.191427 2640 server.go:490] "Adding debug handlers to kubelet server" May 16 09:42:38.191530 kubelet[2640]: I0516 09:42:38.191512 2640 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 09:42:38.191715 kubelet[2640]: I0516 09:42:38.191701 2640 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 09:42:38.191907 kubelet[2640]: I0516 09:42:38.191889 2640 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 09:42:38.192874 kubelet[2640]: E0516 09:42:38.192846 2640 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 09:42:38.193356 kubelet[2640]: I0516 09:42:38.193331 2640 reconciler.go:26] "Reconciler: start to sync state" May 16 09:42:38.193356 kubelet[2640]: I0516 09:42:38.193357 2640 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 16 09:42:38.196355 kubelet[2640]: I0516 09:42:38.196337 2640 factory.go:221] Registration of the systemd container factory successfully May 16 09:42:38.196530 kubelet[2640]: I0516 09:42:38.196510 2640 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 09:42:38.198197 kubelet[2640]: E0516 09:42:38.198056 2640 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 09:42:38.198527 kubelet[2640]: I0516 09:42:38.198510 2640 factory.go:221] Registration of the containerd container factory successfully May 16 09:42:38.201789 kubelet[2640]: I0516 09:42:38.201275 2640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 09:42:38.202707 kubelet[2640]: I0516 09:42:38.202687 2640 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 09:42:38.202803 kubelet[2640]: I0516 09:42:38.202792 2640 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 09:42:38.202907 kubelet[2640]: I0516 09:42:38.202891 2640 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 09:42:38.202962 kubelet[2640]: I0516 09:42:38.202952 2640 kubelet.go:2388] "Starting kubelet main sync loop" May 16 09:42:38.203050 kubelet[2640]: E0516 09:42:38.203029 2640 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 09:42:38.248539 kubelet[2640]: I0516 09:42:38.248511 2640 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 09:42:38.248539 kubelet[2640]: I0516 09:42:38.248531 2640 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 09:42:38.248672 kubelet[2640]: I0516 09:42:38.248550 2640 state_mem.go:36] "Initialized new in-memory state store" May 16 09:42:38.248725 kubelet[2640]: I0516 09:42:38.248707 2640 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 09:42:38.248761 kubelet[2640]: I0516 09:42:38.248723 2640 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 09:42:38.248795 kubelet[2640]: I0516 09:42:38.248763 2640 policy_none.go:49] "None policy: Start" May 16 09:42:38.248795 kubelet[2640]: I0516 09:42:38.248773 2640 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 09:42:38.248795 kubelet[2640]: I0516 09:42:38.248782 2640 state_mem.go:35] "Initializing new in-memory state store" May 16 09:42:38.248888 kubelet[2640]: I0516 09:42:38.248875 2640 state_mem.go:75] "Updated machine memory state" May 16 09:42:38.252831 kubelet[2640]: I0516 09:42:38.252803 2640 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 09:42:38.252973 kubelet[2640]: I0516 
09:42:38.252949 2640 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 09:42:38.253005 kubelet[2640]: I0516 09:42:38.252967 2640 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 09:42:38.253518 kubelet[2640]: I0516 09:42:38.253498 2640 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 09:42:38.254481 kubelet[2640]: E0516 09:42:38.254287 2640 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 09:42:38.304092 kubelet[2640]: I0516 09:42:38.304028 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 09:42:38.304092 kubelet[2640]: I0516 09:42:38.304075 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 09:42:38.304232 kubelet[2640]: I0516 09:42:38.304157 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 09:42:38.308961 kubelet[2640]: E0516 09:42:38.308928 2640 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 09:42:38.356352 kubelet[2640]: I0516 09:42:38.356318 2640 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 16 09:42:38.362249 kubelet[2640]: I0516 09:42:38.362219 2640 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 16 09:42:38.362343 kubelet[2640]: I0516 09:42:38.362306 2640 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 16 09:42:38.395363 kubelet[2640]: I0516 09:42:38.395316 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2a1cc28c54e9de309f9835eabbb0043-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"a2a1cc28c54e9de309f9835eabbb0043\") " pod="kube-system/kube-apiserver-localhost" May 16 09:42:38.395363 kubelet[2640]: I0516 09:42:38.395358 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2a1cc28c54e9de309f9835eabbb0043-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a2a1cc28c54e9de309f9835eabbb0043\") " pod="kube-system/kube-apiserver-localhost" May 16 09:42:38.395498 kubelet[2640]: I0516 09:42:38.395378 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:38.395498 kubelet[2640]: I0516 09:42:38.395406 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:38.395498 kubelet[2640]: I0516 09:42:38.395423 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2a1cc28c54e9de309f9835eabbb0043-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2a1cc28c54e9de309f9835eabbb0043\") " pod="kube-system/kube-apiserver-localhost" May 16 09:42:38.395498 kubelet[2640]: I0516 09:42:38.395436 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") 
pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:38.395498 kubelet[2640]: I0516 09:42:38.395453 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:38.395640 kubelet[2640]: I0516 09:42:38.395469 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 09:42:38.395640 kubelet[2640]: I0516 09:42:38.395484 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 09:42:38.609342 kubelet[2640]: E0516 09:42:38.609300 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:38.611172 kubelet[2640]: E0516 09:42:38.611122 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:38.611275 kubelet[2640]: E0516 09:42:38.611228 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:39.188705 kubelet[2640]: I0516 09:42:39.188664 2640 apiserver.go:52] "Watching apiserver" May 16 09:42:39.193808 kubelet[2640]: I0516 09:42:39.193728 2640 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 16 09:42:39.237784 kubelet[2640]: E0516 09:42:39.237393 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:39.237784 kubelet[2640]: E0516 09:42:39.237777 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:39.237928 kubelet[2640]: I0516 09:42:39.237886 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 09:42:39.243012 kubelet[2640]: E0516 09:42:39.242867 2640 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 09:42:39.243757 kubelet[2640]: E0516 09:42:39.243549 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:39.261936 kubelet[2640]: I0516 09:42:39.261873 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.261836524 podStartE2EDuration="2.261836524s" podCreationTimestamp="2025-05-16 09:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:42:39.261046612 +0000 UTC m=+1.128774323" watchObservedRunningTime="2025-05-16 09:42:39.261836524 +0000 UTC m=+1.129564235" May 16 09:42:39.284042 kubelet[2640]: I0516 
09:42:39.283947 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.283929088 podStartE2EDuration="1.283929088s" podCreationTimestamp="2025-05-16 09:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:42:39.274171924 +0000 UTC m=+1.141899635" watchObservedRunningTime="2025-05-16 09:42:39.283929088 +0000 UTC m=+1.151656759" May 16 09:42:39.308509 kubelet[2640]: I0516 09:42:39.308450 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.308432606 podStartE2EDuration="1.308432606s" podCreationTimestamp="2025-05-16 09:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:42:39.284039276 +0000 UTC m=+1.151766987" watchObservedRunningTime="2025-05-16 09:42:39.308432606 +0000 UTC m=+1.176160357" May 16 09:42:40.240594 kubelet[2640]: E0516 09:42:40.240555 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:40.240962 kubelet[2640]: E0516 09:42:40.240629 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:40.453297 kubelet[2640]: E0516 09:42:40.453263 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:42.796733 sudo[1733]: pam_unix(sudo:session): session closed for user root May 16 09:42:42.797833 sshd[1732]: Connection closed by 10.0.0.1 port 38546 May 16 09:42:42.798370 
sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 16 09:42:42.804563 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:38546.service: Deactivated successfully. May 16 09:42:42.806626 systemd[1]: session-7.scope: Deactivated successfully. May 16 09:42:42.807837 systemd[1]: session-7.scope: Consumed 6.625s CPU time, 231.6M memory peak. May 16 09:42:42.808950 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit. May 16 09:42:42.811343 systemd-logind[1512]: Removed session 7. May 16 09:42:43.608954 kubelet[2640]: I0516 09:42:43.608915 2640 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 09:42:43.614473 containerd[1522]: time="2025-05-16T09:42:43.614437390Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 09:42:43.614779 kubelet[2640]: I0516 09:42:43.614665 2640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 09:42:44.468356 kubelet[2640]: E0516 09:42:44.468323 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:44.583694 systemd[1]: Created slice kubepods-besteffort-pod418735e3_d620_4c55_bfa3_58f39ef246e0.slice - libcontainer container kubepods-besteffort-pod418735e3_d620_4c55_bfa3_58f39ef246e0.slice. 
May 16 09:42:44.640301 kubelet[2640]: I0516 09:42:44.640257 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/418735e3-d620-4c55-bfa3-58f39ef246e0-lib-modules\") pod \"kube-proxy-9gvlz\" (UID: \"418735e3-d620-4c55-bfa3-58f39ef246e0\") " pod="kube-system/kube-proxy-9gvlz" May 16 09:42:44.640301 kubelet[2640]: I0516 09:42:44.640294 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7sz\" (UniqueName: \"kubernetes.io/projected/418735e3-d620-4c55-bfa3-58f39ef246e0-kube-api-access-kf7sz\") pod \"kube-proxy-9gvlz\" (UID: \"418735e3-d620-4c55-bfa3-58f39ef246e0\") " pod="kube-system/kube-proxy-9gvlz" May 16 09:42:44.640301 kubelet[2640]: I0516 09:42:44.640314 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/418735e3-d620-4c55-bfa3-58f39ef246e0-kube-proxy\") pod \"kube-proxy-9gvlz\" (UID: \"418735e3-d620-4c55-bfa3-58f39ef246e0\") " pod="kube-system/kube-proxy-9gvlz" May 16 09:42:44.640680 kubelet[2640]: I0516 09:42:44.640331 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/418735e3-d620-4c55-bfa3-58f39ef246e0-xtables-lock\") pod \"kube-proxy-9gvlz\" (UID: \"418735e3-d620-4c55-bfa3-58f39ef246e0\") " pod="kube-system/kube-proxy-9gvlz" May 16 09:42:44.690056 systemd[1]: Created slice kubepods-besteffort-pod1d17556d_ad30_4b8e_afc0_6a43696db075.slice - libcontainer container kubepods-besteffort-pod1d17556d_ad30_4b8e_afc0_6a43696db075.slice. 
May 16 09:42:44.741416 kubelet[2640]: I0516 09:42:44.741375 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d17556d-ad30-4b8e-afc0-6a43696db075-var-lib-calico\") pod \"tigera-operator-789496d6f5-nw69h\" (UID: \"1d17556d-ad30-4b8e-afc0-6a43696db075\") " pod="tigera-operator/tigera-operator-789496d6f5-nw69h" May 16 09:42:44.741416 kubelet[2640]: I0516 09:42:44.741421 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdj9\" (UniqueName: \"kubernetes.io/projected/1d17556d-ad30-4b8e-afc0-6a43696db075-kube-api-access-rrdj9\") pod \"tigera-operator-789496d6f5-nw69h\" (UID: \"1d17556d-ad30-4b8e-afc0-6a43696db075\") " pod="tigera-operator/tigera-operator-789496d6f5-nw69h" May 16 09:42:44.902814 kubelet[2640]: E0516 09:42:44.902771 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:44.910635 containerd[1522]: time="2025-05-16T09:42:44.910577508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gvlz,Uid:418735e3-d620-4c55-bfa3-58f39ef246e0,Namespace:kube-system,Attempt:0,}" May 16 09:42:44.924314 containerd[1522]: time="2025-05-16T09:42:44.924222495Z" level=info msg="connecting to shim c755070e275168e07dde22de0e83837c40fa5186b9dd0dd326cffeafec5e7ef1" address="unix:///run/containerd/s/296dd5392471a0c49f8589499bb71bbbf97375d9c4169d06b851652828b844f7" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:44.948916 systemd[1]: Started cri-containerd-c755070e275168e07dde22de0e83837c40fa5186b9dd0dd326cffeafec5e7ef1.scope - libcontainer container c755070e275168e07dde22de0e83837c40fa5186b9dd0dd326cffeafec5e7ef1. 
May 16 09:42:44.969405 containerd[1522]: time="2025-05-16T09:42:44.969351476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gvlz,Uid:418735e3-d620-4c55-bfa3-58f39ef246e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c755070e275168e07dde22de0e83837c40fa5186b9dd0dd326cffeafec5e7ef1\"" May 16 09:42:44.970091 kubelet[2640]: E0516 09:42:44.970065 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:44.980201 containerd[1522]: time="2025-05-16T09:42:44.980164100Z" level=info msg="CreateContainer within sandbox \"c755070e275168e07dde22de0e83837c40fa5186b9dd0dd326cffeafec5e7ef1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 09:42:44.989213 containerd[1522]: time="2025-05-16T09:42:44.988152639Z" level=info msg="Container f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b: CDI devices from CRI Config.CDIDevices: []" May 16 09:42:44.994908 containerd[1522]: time="2025-05-16T09:42:44.994810748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-nw69h,Uid:1d17556d-ad30-4b8e-afc0-6a43696db075,Namespace:tigera-operator,Attempt:0,}" May 16 09:42:45.000852 containerd[1522]: time="2025-05-16T09:42:45.000820197Z" level=info msg="CreateContainer within sandbox \"c755070e275168e07dde22de0e83837c40fa5186b9dd0dd326cffeafec5e7ef1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b\"" May 16 09:42:45.001389 containerd[1522]: time="2025-05-16T09:42:45.001365385Z" level=info msg="StartContainer for \"f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b\"" May 16 09:42:45.002917 containerd[1522]: time="2025-05-16T09:42:45.002892261Z" level=info msg="connecting to shim f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b" 
address="unix:///run/containerd/s/296dd5392471a0c49f8589499bb71bbbf97375d9c4169d06b851652828b844f7" protocol=ttrpc version=3 May 16 09:42:45.019367 containerd[1522]: time="2025-05-16T09:42:45.019308577Z" level=info msg="connecting to shim ea472c2df34b7ce2e5b7788033e7e1776bcd96b424b8a64a1ac4a1a6085f5a67" address="unix:///run/containerd/s/dd02a8ebf19366ed520f56e3f7bdd9573d714ee60997180d481e0906ef4d4c11" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:45.023115 systemd[1]: Started cri-containerd-f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b.scope - libcontainer container f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b. May 16 09:42:45.043914 systemd[1]: Started cri-containerd-ea472c2df34b7ce2e5b7788033e7e1776bcd96b424b8a64a1ac4a1a6085f5a67.scope - libcontainer container ea472c2df34b7ce2e5b7788033e7e1776bcd96b424b8a64a1ac4a1a6085f5a67. May 16 09:42:45.067458 containerd[1522]: time="2025-05-16T09:42:45.067402932Z" level=info msg="StartContainer for \"f6e27decfceae7d3a6e87c967603b14aaad9b30b86f615a64907eae1150d110b\" returns successfully" May 16 09:42:45.084061 containerd[1522]: time="2025-05-16T09:42:45.084001631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-nw69h,Uid:1d17556d-ad30-4b8e-afc0-6a43696db075,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ea472c2df34b7ce2e5b7788033e7e1776bcd96b424b8a64a1ac4a1a6085f5a67\"" May 16 09:42:45.085796 containerd[1522]: time="2025-05-16T09:42:45.085768471Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 16 09:42:45.253035 kubelet[2640]: E0516 09:42:45.252920 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:45.254905 kubelet[2640]: E0516 09:42:45.254722 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:45.271585 kubelet[2640]: I0516 09:42:45.271517 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9gvlz" podStartSLOduration=1.271500398 podStartE2EDuration="1.271500398s" podCreationTimestamp="2025-05-16 09:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:42:45.262901323 +0000 UTC m=+7.130629034" watchObservedRunningTime="2025-05-16 09:42:45.271500398 +0000 UTC m=+7.139228108" May 16 09:42:45.761173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660360907.mount: Deactivated successfully. May 16 09:42:46.262967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816224228.mount: Deactivated successfully. May 16 09:42:46.568712 kubelet[2640]: E0516 09:42:46.568593 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:46.871718 containerd[1522]: time="2025-05-16T09:42:46.871589861Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:46.872346 containerd[1522]: time="2025-05-16T09:42:46.872313366Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 16 09:42:46.872958 containerd[1522]: time="2025-05-16T09:42:46.872931662Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:46.874877 containerd[1522]: time="2025-05-16T09:42:46.874843454Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:46.876216 containerd[1522]: time="2025-05-16T09:42:46.876184895Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.790345447s" May 16 09:42:46.876272 containerd[1522]: time="2025-05-16T09:42:46.876219005Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 16 09:42:46.879532 containerd[1522]: time="2025-05-16T09:42:46.879413816Z" level=info msg="CreateContainer within sandbox \"ea472c2df34b7ce2e5b7788033e7e1776bcd96b424b8a64a1ac4a1a6085f5a67\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 09:42:46.885696 containerd[1522]: time="2025-05-16T09:42:46.885035585Z" level=info msg="Container 8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e: CDI devices from CRI Config.CDIDevices: []" May 16 09:42:46.888277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175628424.mount: Deactivated successfully. 
May 16 09:42:46.893216 containerd[1522]: time="2025-05-16T09:42:46.893155452Z" level=info msg="CreateContainer within sandbox \"ea472c2df34b7ce2e5b7788033e7e1776bcd96b424b8a64a1ac4a1a6085f5a67\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e\"" May 16 09:42:46.893531 containerd[1522]: time="2025-05-16T09:42:46.893497111Z" level=info msg="StartContainer for \"8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e\"" May 16 09:42:46.895115 containerd[1522]: time="2025-05-16T09:42:46.895077521Z" level=info msg="connecting to shim 8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e" address="unix:///run/containerd/s/dd02a8ebf19366ed520f56e3f7bdd9573d714ee60997180d481e0906ef4d4c11" protocol=ttrpc version=3 May 16 09:42:46.914992 systemd[1]: Started cri-containerd-8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e.scope - libcontainer container 8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e. 
May 16 09:42:46.969699 containerd[1522]: time="2025-05-16T09:42:46.969660598Z" level=info msg="StartContainer for \"8a8df857ef483d964512ab19b12bef0949603607be93f31520df85017e1e373e\" returns successfully" May 16 09:42:47.260780 kubelet[2640]: E0516 09:42:47.260118 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:47.383566 kubelet[2640]: I0516 09:42:47.383428 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-nw69h" podStartSLOduration=1.5905933970000001 podStartE2EDuration="3.383409818s" podCreationTimestamp="2025-05-16 09:42:44 +0000 UTC" firstStartedPulling="2025-05-16 09:42:45.085204569 +0000 UTC m=+6.952932240" lastFinishedPulling="2025-05-16 09:42:46.87802095 +0000 UTC m=+8.745748661" observedRunningTime="2025-05-16 09:42:47.284637694 +0000 UTC m=+9.152365405" watchObservedRunningTime="2025-05-16 09:42:47.383409818 +0000 UTC m=+9.251137529" May 16 09:42:48.261489 kubelet[2640]: E0516 09:42:48.261447 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:50.462448 kubelet[2640]: E0516 09:42:50.462416 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:51.472203 systemd[1]: Created slice kubepods-besteffort-podaf33d0dc_fbd1_4a3d_a3b0_732f99bfe2ba.slice - libcontainer container kubepods-besteffort-podaf33d0dc_fbd1_4a3d_a3b0_732f99bfe2ba.slice. 
May 16 09:42:51.491502 kubelet[2640]: I0516 09:42:51.491397 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba-tigera-ca-bundle\") pod \"calico-typha-cf5fbcd59-tjjd2\" (UID: \"af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba\") " pod="calico-system/calico-typha-cf5fbcd59-tjjd2" May 16 09:42:51.491502 kubelet[2640]: I0516 09:42:51.491451 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba-typha-certs\") pod \"calico-typha-cf5fbcd59-tjjd2\" (UID: \"af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba\") " pod="calico-system/calico-typha-cf5fbcd59-tjjd2" May 16 09:42:51.491969 kubelet[2640]: I0516 09:42:51.491472 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfwj\" (UniqueName: \"kubernetes.io/projected/af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba-kube-api-access-btfwj\") pod \"calico-typha-cf5fbcd59-tjjd2\" (UID: \"af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba\") " pod="calico-system/calico-typha-cf5fbcd59-tjjd2" May 16 09:42:51.653287 systemd[1]: Created slice kubepods-besteffort-podfd284fac_0577_4390_bd18_0a54500dad75.slice - libcontainer container kubepods-besteffort-podfd284fac_0577_4390_bd18_0a54500dad75.slice. 
May 16 09:42:51.693361 kubelet[2640]: I0516 09:42:51.693313 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-cni-bin-dir\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.693361 kubelet[2640]: I0516 09:42:51.693358 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-var-run-calico\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.693361 kubelet[2640]: I0516 09:42:51.693374 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-cni-log-dir\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694466 kubelet[2640]: I0516 09:42:51.694413 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-flexvol-driver-host\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694540 kubelet[2640]: I0516 09:42:51.694472 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-var-lib-calico\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694540 kubelet[2640]: I0516 09:42:51.694489 2640 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-policysync\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694540 kubelet[2640]: I0516 09:42:51.694504 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd284fac-0577-4390-bd18-0a54500dad75-tigera-ca-bundle\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694540 kubelet[2640]: I0516 09:42:51.694529 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-cni-net-dir\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694633 kubelet[2640]: I0516 09:42:51.694546 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-xtables-lock\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694633 kubelet[2640]: I0516 09:42:51.694563 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fd284fac-0577-4390-bd18-0a54500dad75-node-certs\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694633 kubelet[2640]: I0516 09:42:51.694579 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-phcst\" (UniqueName: \"kubernetes.io/projected/fd284fac-0577-4390-bd18-0a54500dad75-kube-api-access-phcst\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.694633 kubelet[2640]: I0516 09:42:51.694608 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd284fac-0577-4390-bd18-0a54500dad75-lib-modules\") pod \"calico-node-d5w78\" (UID: \"fd284fac-0577-4390-bd18-0a54500dad75\") " pod="calico-system/calico-node-d5w78" May 16 09:42:51.779800 kubelet[2640]: E0516 09:42:51.779731 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:51.780473 containerd[1522]: time="2025-05-16T09:42:51.780407826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf5fbcd59-tjjd2,Uid:af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba,Namespace:calico-system,Attempt:0,}" May 16 09:42:51.799764 kubelet[2640]: E0516 09:42:51.799714 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.799764 kubelet[2640]: W0516 09:42:51.799736 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.801511 kubelet[2640]: E0516 09:42:51.801052 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.801511 kubelet[2640]: W0516 09:42:51.801070 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" May 16 09:42:51.803483 kubelet[2640]: E0516 09:42:51.803443 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.804245 kubelet[2640]: E0516 09:42:51.804213 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.808630 kubelet[2640]: E0516 09:42:51.808547 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.808630 kubelet[2640]: W0516 09:42:51.808563 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.808630 kubelet[2640]: E0516 09:42:51.808586 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.821633 containerd[1522]: time="2025-05-16T09:42:51.821587205Z" level=info msg="connecting to shim 06cae3dfc61ce7aae8b50414b5cdba2a730b1ee19a5b35d02377ea8298225c7c" address="unix:///run/containerd/s/f79d1e5ba06bb77b1229cb9029a97bb9c5a0d93e2cf06218e7e0046fa1ae5d53" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:51.854352 kubelet[2640]: E0516 09:42:51.854002 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:42:51.880247 kubelet[2640]: E0516 09:42:51.880214 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.880247 kubelet[2640]: W0516 09:42:51.880236 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.880396 kubelet[2640]: E0516 09:42:51.880256 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.880450 kubelet[2640]: E0516 09:42:51.880434 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.880488 kubelet[2640]: W0516 09:42:51.880447 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.880527 kubelet[2640]: E0516 09:42:51.880490 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.880671 kubelet[2640]: E0516 09:42:51.880658 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.880671 kubelet[2640]: W0516 09:42:51.880670 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.880760 kubelet[2640]: E0516 09:42:51.880680 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.880864 kubelet[2640]: E0516 09:42:51.880848 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.880900 kubelet[2640]: W0516 09:42:51.880876 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.880900 kubelet[2640]: E0516 09:42:51.880887 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.881079 kubelet[2640]: E0516 09:42:51.881064 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881079 kubelet[2640]: W0516 09:42:51.881076 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.881138 kubelet[2640]: E0516 09:42:51.881090 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.881233 kubelet[2640]: E0516 09:42:51.881221 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881233 kubelet[2640]: W0516 09:42:51.881232 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.881282 kubelet[2640]: E0516 09:42:51.881239 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.881355 kubelet[2640]: E0516 09:42:51.881345 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881379 kubelet[2640]: W0516 09:42:51.881354 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.881379 kubelet[2640]: E0516 09:42:51.881362 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.881499 kubelet[2640]: E0516 09:42:51.881488 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881525 kubelet[2640]: W0516 09:42:51.881500 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.881525 kubelet[2640]: E0516 09:42:51.881507 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.881650 kubelet[2640]: E0516 09:42:51.881638 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881650 kubelet[2640]: W0516 09:42:51.881648 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.881762 kubelet[2640]: E0516 09:42:51.881655 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.881795 kubelet[2640]: E0516 09:42:51.881782 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881795 kubelet[2640]: W0516 09:42:51.881789 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.881842 kubelet[2640]: E0516 09:42:51.881796 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.881959 kubelet[2640]: E0516 09:42:51.881944 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.881959 kubelet[2640]: W0516 09:42:51.881956 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882016 kubelet[2640]: E0516 09:42:51.881964 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.882097 kubelet[2640]: E0516 09:42:51.882086 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882126 kubelet[2640]: W0516 09:42:51.882097 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882126 kubelet[2640]: E0516 09:42:51.882106 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.882241 kubelet[2640]: E0516 09:42:51.882230 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882265 kubelet[2640]: W0516 09:42:51.882240 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882265 kubelet[2640]: E0516 09:42:51.882248 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.882368 kubelet[2640]: E0516 09:42:51.882358 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882400 kubelet[2640]: W0516 09:42:51.882368 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882400 kubelet[2640]: E0516 09:42:51.882375 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.882503 kubelet[2640]: E0516 09:42:51.882493 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882524 kubelet[2640]: W0516 09:42:51.882503 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882524 kubelet[2640]: E0516 09:42:51.882510 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.882634 kubelet[2640]: E0516 09:42:51.882624 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882657 kubelet[2640]: W0516 09:42:51.882634 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882657 kubelet[2640]: E0516 09:42:51.882641 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.882817 kubelet[2640]: E0516 09:42:51.882804 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882817 kubelet[2640]: W0516 09:42:51.882815 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882872 kubelet[2640]: E0516 09:42:51.882823 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.882953 kubelet[2640]: E0516 09:42:51.882941 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.882953 kubelet[2640]: W0516 09:42:51.882951 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.882995 kubelet[2640]: E0516 09:42:51.882960 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.883078 kubelet[2640]: E0516 09:42:51.883068 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.883102 kubelet[2640]: W0516 09:42:51.883078 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.883102 kubelet[2640]: E0516 09:42:51.883085 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.883209 kubelet[2640]: E0516 09:42:51.883199 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.883229 kubelet[2640]: W0516 09:42:51.883209 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.883229 kubelet[2640]: E0516 09:42:51.883217 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.884929 systemd[1]: Started cri-containerd-06cae3dfc61ce7aae8b50414b5cdba2a730b1ee19a5b35d02377ea8298225c7c.scope - libcontainer container 06cae3dfc61ce7aae8b50414b5cdba2a730b1ee19a5b35d02377ea8298225c7c. May 16 09:42:51.897022 kubelet[2640]: E0516 09:42:51.896996 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.897022 kubelet[2640]: W0516 09:42:51.897013 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.897022 kubelet[2640]: E0516 09:42:51.897028 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.897812 kubelet[2640]: I0516 09:42:51.897055 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/598d8256-d9c2-44b4-9749-6e31a479eb52-kubelet-dir\") pod \"csi-node-driver-tzdcv\" (UID: \"598d8256-d9c2-44b4-9749-6e31a479eb52\") " pod="calico-system/csi-node-driver-tzdcv" May 16 09:42:51.897812 kubelet[2640]: E0516 09:42:51.897216 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.897812 kubelet[2640]: W0516 09:42:51.897226 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.897812 kubelet[2640]: E0516 09:42:51.897240 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.897812 kubelet[2640]: I0516 09:42:51.897255 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/598d8256-d9c2-44b4-9749-6e31a479eb52-varrun\") pod \"csi-node-driver-tzdcv\" (UID: \"598d8256-d9c2-44b4-9749-6e31a479eb52\") " pod="calico-system/csi-node-driver-tzdcv" May 16 09:42:51.897812 kubelet[2640]: E0516 09:42:51.897415 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.897812 kubelet[2640]: W0516 09:42:51.897425 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.897812 kubelet[2640]: E0516 09:42:51.897439 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.898017 kubelet[2640]: I0516 09:42:51.897453 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/598d8256-d9c2-44b4-9749-6e31a479eb52-registration-dir\") pod \"csi-node-driver-tzdcv\" (UID: \"598d8256-d9c2-44b4-9749-6e31a479eb52\") " pod="calico-system/csi-node-driver-tzdcv" May 16 09:42:51.898017 kubelet[2640]: E0516 09:42:51.897599 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.898017 kubelet[2640]: W0516 09:42:51.897611 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.898017 kubelet[2640]: E0516 09:42:51.897623 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.898017 kubelet[2640]: I0516 09:42:51.897640 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttldr\" (UniqueName: \"kubernetes.io/projected/598d8256-d9c2-44b4-9749-6e31a479eb52-kube-api-access-ttldr\") pod \"csi-node-driver-tzdcv\" (UID: \"598d8256-d9c2-44b4-9749-6e31a479eb52\") " pod="calico-system/csi-node-driver-tzdcv" May 16 09:42:51.898017 kubelet[2640]: E0516 09:42:51.897791 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.898017 kubelet[2640]: W0516 09:42:51.897801 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.898017 kubelet[2640]: E0516 09:42:51.897821 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.898197 kubelet[2640]: I0516 09:42:51.897860 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/598d8256-d9c2-44b4-9749-6e31a479eb52-socket-dir\") pod \"csi-node-driver-tzdcv\" (UID: \"598d8256-d9c2-44b4-9749-6e31a479eb52\") " pod="calico-system/csi-node-driver-tzdcv" May 16 09:42:51.898197 kubelet[2640]: E0516 09:42:51.898079 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.898197 kubelet[2640]: W0516 09:42:51.898089 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.898197 kubelet[2640]: E0516 09:42:51.898105 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.898298 kubelet[2640]: E0516 09:42:51.898274 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.898298 kubelet[2640]: W0516 09:42:51.898283 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.898340 kubelet[2640]: E0516 09:42:51.898298 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.898483 kubelet[2640]: E0516 09:42:51.898460 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.898483 kubelet[2640]: W0516 09:42:51.898481 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.898552 kubelet[2640]: E0516 09:42:51.898494 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.898678 kubelet[2640]: E0516 09:42:51.898657 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.898678 kubelet[2640]: W0516 09:42:51.898669 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.899071 kubelet[2640]: E0516 09:42:51.898719 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.899071 kubelet[2640]: E0516 09:42:51.898870 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.899071 kubelet[2640]: W0516 09:42:51.898878 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.899071 kubelet[2640]: E0516 09:42:51.898905 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.899162 kubelet[2640]: E0516 09:42:51.899080 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.899162 kubelet[2640]: W0516 09:42:51.899090 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.899162 kubelet[2640]: E0516 09:42:51.899117 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.899290 kubelet[2640]: E0516 09:42:51.899269 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.899290 kubelet[2640]: W0516 09:42:51.899282 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899334 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899415 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.900788 kubelet[2640]: W0516 09:42:51.899424 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899432 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899604 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.900788 kubelet[2640]: W0516 09:42:51.899612 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899621 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899780 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.900788 kubelet[2640]: W0516 09:42:51.899788 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.900788 kubelet[2640]: E0516 09:42:51.899796 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:51.914107 containerd[1522]: time="2025-05-16T09:42:51.914056427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf5fbcd59-tjjd2,Uid:af33d0dc-fbd1-4a3d-a3b0-732f99bfe2ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"06cae3dfc61ce7aae8b50414b5cdba2a730b1ee19a5b35d02377ea8298225c7c\"" May 16 09:42:51.915173 kubelet[2640]: E0516 09:42:51.915144 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:51.917357 containerd[1522]: time="2025-05-16T09:42:51.917055661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 16 09:42:51.956220 kubelet[2640]: E0516 09:42:51.956187 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:51.957774 containerd[1522]: time="2025-05-16T09:42:51.957231336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5w78,Uid:fd284fac-0577-4390-bd18-0a54500dad75,Namespace:calico-system,Attempt:0,}" May 16 09:42:51.985857 containerd[1522]: time="2025-05-16T09:42:51.985816465Z" level=info msg="connecting to shim 146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066" address="unix:///run/containerd/s/8046153fef3344e7c132526a1f049bab6a6db32fd9db5dbb7c0053409cf4de41" namespace=k8s.io protocol=ttrpc version=3 May 16 09:42:51.999348 kubelet[2640]: E0516 09:42:51.999314 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.999348 kubelet[2640]: W0516 09:42:51.999338 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 
16 09:42:51.999500 kubelet[2640]: E0516 09:42:51.999357 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:51.999732 kubelet[2640]: E0516 09:42:51.999719 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:51.999732 kubelet[2640]: W0516 09:42:51.999731 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:51.999817 kubelet[2640]: E0516 09:42:51.999786 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.000062 kubelet[2640]: E0516 09:42:52.000041 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.000062 kubelet[2640]: W0516 09:42:52.000054 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.000122 kubelet[2640]: E0516 09:42:52.000078 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.000409 kubelet[2640]: E0516 09:42:52.000392 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.000409 kubelet[2640]: W0516 09:42:52.000408 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.000469 kubelet[2640]: E0516 09:42:52.000427 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.000592 kubelet[2640]: E0516 09:42:52.000580 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.000703 kubelet[2640]: W0516 09:42:52.000680 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.000732 kubelet[2640]: E0516 09:42:52.000704 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.000896 kubelet[2640]: E0516 09:42:52.000878 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.000896 kubelet[2640]: W0516 09:42:52.000891 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.000951 kubelet[2640]: E0516 09:42:52.000908 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.001189 kubelet[2640]: E0516 09:42:52.001169 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.001189 kubelet[2640]: W0516 09:42:52.001180 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.001264 kubelet[2640]: E0516 09:42:52.001235 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.001451 kubelet[2640]: E0516 09:42:52.001373 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.001451 kubelet[2640]: W0516 09:42:52.001386 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.001451 kubelet[2640]: E0516 09:42:52.001421 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.001602 kubelet[2640]: E0516 09:42:52.001530 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.001602 kubelet[2640]: W0516 09:42:52.001551 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.001602 kubelet[2640]: E0516 09:42:52.001593 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.001705 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002756 kubelet[2640]: W0516 09:42:52.001715 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.001848 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002756 kubelet[2640]: W0516 09:42:52.001856 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.001866 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.001957 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.002020 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002756 kubelet[2640]: W0516 09:42:52.002028 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.002037 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.002756 kubelet[2640]: E0516 09:42:52.002200 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002992 kubelet[2640]: W0516 09:42:52.002210 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.002992 kubelet[2640]: E0516 09:42:52.002221 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.002992 kubelet[2640]: E0516 09:42:52.002396 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002992 kubelet[2640]: W0516 09:42:52.002404 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.002992 kubelet[2640]: E0516 09:42:52.002413 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.002992 kubelet[2640]: E0516 09:42:52.002536 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002992 kubelet[2640]: W0516 09:42:52.002543 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.002992 kubelet[2640]: E0516 09:42:52.002552 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.002992 kubelet[2640]: E0516 09:42:52.002662 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.002992 kubelet[2640]: W0516 09:42:52.002669 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.003174 kubelet[2640]: E0516 09:42:52.002677 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.003174 kubelet[2640]: E0516 09:42:52.003042 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.003174 kubelet[2640]: W0516 09:42:52.003052 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.003174 kubelet[2640]: E0516 09:42:52.003072 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.003250 kubelet[2640]: E0516 09:42:52.003227 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.003250 kubelet[2640]: W0516 09:42:52.003235 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.003289 kubelet[2640]: E0516 09:42:52.003257 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.003402 kubelet[2640]: E0516 09:42:52.003383 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.003402 kubelet[2640]: W0516 09:42:52.003396 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.003465 kubelet[2640]: E0516 09:42:52.003418 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.003604 kubelet[2640]: E0516 09:42:52.003577 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.003604 kubelet[2640]: W0516 09:42:52.003589 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.003666 kubelet[2640]: E0516 09:42:52.003611 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.003803 kubelet[2640]: E0516 09:42:52.003789 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.003803 kubelet[2640]: W0516 09:42:52.003801 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.003873 kubelet[2640]: E0516 09:42:52.003818 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.003967 systemd[1]: Started cri-containerd-146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066.scope - libcontainer container 146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066. 
May 16 09:42:52.004052 kubelet[2640]: E0516 09:42:52.004003 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.004052 kubelet[2640]: W0516 09:42:52.004011 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.004052 kubelet[2640]: E0516 09:42:52.004025 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.004188 kubelet[2640]: E0516 09:42:52.004168 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.004188 kubelet[2640]: W0516 09:42:52.004179 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.004188 kubelet[2640]: E0516 09:42:52.004188 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.004358 kubelet[2640]: E0516 09:42:52.004342 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.004358 kubelet[2640]: W0516 09:42:52.004352 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.004417 kubelet[2640]: E0516 09:42:52.004368 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.004537 kubelet[2640]: E0516 09:42:52.004519 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.004537 kubelet[2640]: W0516 09:42:52.004531 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.004590 kubelet[2640]: E0516 09:42:52.004539 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:52.016403 kubelet[2640]: E0516 09:42:52.016355 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:52.016403 kubelet[2640]: W0516 09:42:52.016382 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:52.016403 kubelet[2640]: E0516 09:42:52.016396 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:52.037934 containerd[1522]: time="2025-05-16T09:42:52.037587500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d5w78,Uid:fd284fac-0577-4390-bd18-0a54500dad75,Namespace:calico-system,Attempt:0,} returns sandbox id \"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\"" May 16 09:42:52.038602 kubelet[2640]: E0516 09:42:52.038540 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:53.203457 kubelet[2640]: E0516 09:42:53.203406 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:42:54.496303 update_engine[1513]: I20250516 09:42:54.496237 1513 update_attempter.cc:509] Updating boot flags... 
May 16 09:42:54.900127 containerd[1522]: time="2025-05-16T09:42:54.899468893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:54.900127 containerd[1522]: time="2025-05-16T09:42:54.900103020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 16 09:42:54.900809 containerd[1522]: time="2025-05-16T09:42:54.900773941Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:54.902862 containerd[1522]: time="2025-05-16T09:42:54.902824658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:42:54.903771 containerd[1522]: time="2025-05-16T09:42:54.903708901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.986072484s" May 16 09:42:54.903771 containerd[1522]: time="2025-05-16T09:42:54.903759292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 16 09:42:54.905036 containerd[1522]: time="2025-05-16T09:42:54.904934444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 16 09:42:54.919979 containerd[1522]: time="2025-05-16T09:42:54.919770813Z" level=info msg="CreateContainer within sandbox \"06cae3dfc61ce7aae8b50414b5cdba2a730b1ee19a5b35d02377ea8298225c7c\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 09:42:54.928042 containerd[1522]: time="2025-05-16T09:42:54.928008232Z" level=info msg="Container 5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6: CDI devices from CRI Config.CDIDevices: []" May 16 09:42:54.933632 containerd[1522]: time="2025-05-16T09:42:54.933582884Z" level=info msg="CreateContainer within sandbox \"06cae3dfc61ce7aae8b50414b5cdba2a730b1ee19a5b35d02377ea8298225c7c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6\"" May 16 09:42:54.934059 containerd[1522]: time="2025-05-16T09:42:54.934028725Z" level=info msg="StartContainer for \"5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6\"" May 16 09:42:54.935999 containerd[1522]: time="2025-05-16T09:42:54.935964902Z" level=info msg="connecting to shim 5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6" address="unix:///run/containerd/s/f79d1e5ba06bb77b1229cb9029a97bb9c5a0d93e2cf06218e7e0046fa1ae5d53" protocol=ttrpc version=3 May 16 09:42:54.961987 systemd[1]: Started cri-containerd-5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6.scope - libcontainer container 5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6. 
May 16 09:42:55.023137 containerd[1522]: time="2025-05-16T09:42:55.023096701Z" level=info msg="StartContainer for \"5a7aa8b7cc43979a6a452819cf2f41b56bcb31a93870325b0d314e96ce4641b6\" returns successfully" May 16 09:42:55.204286 kubelet[2640]: E0516 09:42:55.204174 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:42:55.278922 kubelet[2640]: E0516 09:42:55.278878 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:55.304642 kubelet[2640]: E0516 09:42:55.304606 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.304642 kubelet[2640]: W0516 09:42:55.304627 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.304818 kubelet[2640]: E0516 09:42:55.304659 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.304862 kubelet[2640]: E0516 09:42:55.304843 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.304888 kubelet[2640]: W0516 09:42:55.304855 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.304910 kubelet[2640]: E0516 09:42:55.304893 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.305088 kubelet[2640]: E0516 09:42:55.305061 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305088 kubelet[2640]: W0516 09:42:55.305073 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305088 kubelet[2640]: E0516 09:42:55.305082 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.305234 kubelet[2640]: E0516 09:42:55.305213 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305234 kubelet[2640]: W0516 09:42:55.305225 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305234 kubelet[2640]: E0516 09:42:55.305232 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.305366 kubelet[2640]: E0516 09:42:55.305355 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305394 kubelet[2640]: W0516 09:42:55.305366 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305394 kubelet[2640]: E0516 09:42:55.305373 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.305491 kubelet[2640]: E0516 09:42:55.305481 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305514 kubelet[2640]: W0516 09:42:55.305490 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305514 kubelet[2640]: E0516 09:42:55.305498 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.305617 kubelet[2640]: E0516 09:42:55.305607 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305642 kubelet[2640]: W0516 09:42:55.305617 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305642 kubelet[2640]: E0516 09:42:55.305624 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.305773 kubelet[2640]: E0516 09:42:55.305762 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305806 kubelet[2640]: W0516 09:42:55.305773 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305806 kubelet[2640]: E0516 09:42:55.305781 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.305923 kubelet[2640]: E0516 09:42:55.305911 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.305923 kubelet[2640]: W0516 09:42:55.305922 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.305965 kubelet[2640]: E0516 09:42:55.305931 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.306062 kubelet[2640]: E0516 09:42:55.306051 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.306085 kubelet[2640]: W0516 09:42:55.306062 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.306085 kubelet[2640]: E0516 09:42:55.306069 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.306188 kubelet[2640]: E0516 09:42:55.306179 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.306211 kubelet[2640]: W0516 09:42:55.306188 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.306211 kubelet[2640]: E0516 09:42:55.306196 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.306324 kubelet[2640]: E0516 09:42:55.306313 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.306349 kubelet[2640]: W0516 09:42:55.306325 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.306349 kubelet[2640]: E0516 09:42:55.306333 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.306456 kubelet[2640]: E0516 09:42:55.306447 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.306479 kubelet[2640]: W0516 09:42:55.306456 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.306479 kubelet[2640]: E0516 09:42:55.306463 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.306584 kubelet[2640]: E0516 09:42:55.306573 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.306606 kubelet[2640]: W0516 09:42:55.306584 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.306606 kubelet[2640]: E0516 09:42:55.306590 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.306712 kubelet[2640]: E0516 09:42:55.306701 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.306741 kubelet[2640]: W0516 09:42:55.306712 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.306741 kubelet[2640]: E0516 09:42:55.306721 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.326256 kubelet[2640]: E0516 09:42:55.326212 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.326256 kubelet[2640]: W0516 09:42:55.326243 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.326256 kubelet[2640]: E0516 09:42:55.326259 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.326494 kubelet[2640]: E0516 09:42:55.326466 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.326494 kubelet[2640]: W0516 09:42:55.326479 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.326494 kubelet[2640]: E0516 09:42:55.326495 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.326668 kubelet[2640]: E0516 09:42:55.326645 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.326668 kubelet[2640]: W0516 09:42:55.326657 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.326715 kubelet[2640]: E0516 09:42:55.326681 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.326946 kubelet[2640]: E0516 09:42:55.326931 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.326946 kubelet[2640]: W0516 09:42:55.326943 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.327010 kubelet[2640]: E0516 09:42:55.326958 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.327117 kubelet[2640]: E0516 09:42:55.327104 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.327117 kubelet[2640]: W0516 09:42:55.327113 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.327160 kubelet[2640]: E0516 09:42:55.327126 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.327264 kubelet[2640]: E0516 09:42:55.327254 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.327288 kubelet[2640]: W0516 09:42:55.327264 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.327288 kubelet[2640]: E0516 09:42:55.327275 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.327452 kubelet[2640]: E0516 09:42:55.327431 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.327452 kubelet[2640]: W0516 09:42:55.327443 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.327502 kubelet[2640]: E0516 09:42:55.327456 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.327729 kubelet[2640]: E0516 09:42:55.327698 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.327780 kubelet[2640]: W0516 09:42:55.327726 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.327780 kubelet[2640]: E0516 09:42:55.327759 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.327928 kubelet[2640]: E0516 09:42:55.327915 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.327928 kubelet[2640]: W0516 09:42:55.327926 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.327990 kubelet[2640]: E0516 09:42:55.327951 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.328103 kubelet[2640]: E0516 09:42:55.328090 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.328103 kubelet[2640]: W0516 09:42:55.328102 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.328147 kubelet[2640]: E0516 09:42:55.328124 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.328242 kubelet[2640]: E0516 09:42:55.328232 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.328265 kubelet[2640]: W0516 09:42:55.328242 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.328265 kubelet[2640]: E0516 09:42:55.328254 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.328413 kubelet[2640]: E0516 09:42:55.328402 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.328436 kubelet[2640]: W0516 09:42:55.328413 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.328436 kubelet[2640]: E0516 09:42:55.328426 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.328587 kubelet[2640]: E0516 09:42:55.328576 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.328587 kubelet[2640]: W0516 09:42:55.328586 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.328634 kubelet[2640]: E0516 09:42:55.328599 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.328844 kubelet[2640]: E0516 09:42:55.328827 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.328884 kubelet[2640]: W0516 09:42:55.328844 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.328884 kubelet[2640]: E0516 09:42:55.328857 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.328998 kubelet[2640]: E0516 09:42:55.328984 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.328998 kubelet[2640]: W0516 09:42:55.328996 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.329059 kubelet[2640]: E0516 09:42:55.329009 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.329190 kubelet[2640]: E0516 09:42:55.329178 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.329190 kubelet[2640]: W0516 09:42:55.329188 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.329299 kubelet[2640]: E0516 09:42:55.329202 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:55.329425 kubelet[2640]: E0516 09:42:55.329408 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.329454 kubelet[2640]: W0516 09:42:55.329430 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.329454 kubelet[2640]: E0516 09:42:55.329447 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:55.329610 kubelet[2640]: E0516 09:42:55.329598 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:55.329610 kubelet[2640]: W0516 09:42:55.329608 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:55.329669 kubelet[2640]: E0516 09:42:55.329616 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.279810 kubelet[2640]: I0516 09:42:56.279771 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 09:42:56.280147 kubelet[2640]: E0516 09:42:56.280091 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:42:56.313244 kubelet[2640]: E0516 09:42:56.313209 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.313244 kubelet[2640]: W0516 09:42:56.313231 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.313244 kubelet[2640]: E0516 09:42:56.313250 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.313448 kubelet[2640]: E0516 09:42:56.313419 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.313448 kubelet[2640]: W0516 09:42:56.313432 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.313448 kubelet[2640]: E0516 09:42:56.313441 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.313612 kubelet[2640]: E0516 09:42:56.313588 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.313612 kubelet[2640]: W0516 09:42:56.313600 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.313612 kubelet[2640]: E0516 09:42:56.313608 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.313783 kubelet[2640]: E0516 09:42:56.313761 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.313783 kubelet[2640]: W0516 09:42:56.313773 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.313783 kubelet[2640]: E0516 09:42:56.313781 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.313936 kubelet[2640]: E0516 09:42:56.313916 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.313936 kubelet[2640]: W0516 09:42:56.313928 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.313986 kubelet[2640]: E0516 09:42:56.313944 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.314072 kubelet[2640]: E0516 09:42:56.314061 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314093 kubelet[2640]: W0516 09:42:56.314071 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314093 kubelet[2640]: E0516 09:42:56.314080 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.314214 kubelet[2640]: E0516 09:42:56.314203 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314235 kubelet[2640]: W0516 09:42:56.314213 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314235 kubelet[2640]: E0516 09:42:56.314223 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.314345 kubelet[2640]: E0516 09:42:56.314336 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314366 kubelet[2640]: W0516 09:42:56.314346 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314366 kubelet[2640]: E0516 09:42:56.314353 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.314510 kubelet[2640]: E0516 09:42:56.314497 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314510 kubelet[2640]: W0516 09:42:56.314507 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314570 kubelet[2640]: E0516 09:42:56.314515 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.314645 kubelet[2640]: E0516 09:42:56.314632 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314645 kubelet[2640]: W0516 09:42:56.314643 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314686 kubelet[2640]: E0516 09:42:56.314651 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.314783 kubelet[2640]: E0516 09:42:56.314773 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314814 kubelet[2640]: W0516 09:42:56.314783 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314814 kubelet[2640]: E0516 09:42:56.314792 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.314935 kubelet[2640]: E0516 09:42:56.314925 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.314959 kubelet[2640]: W0516 09:42:56.314935 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.314959 kubelet[2640]: E0516 09:42:56.314942 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.315088 kubelet[2640]: E0516 09:42:56.315076 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.315088 kubelet[2640]: W0516 09:42:56.315086 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.315140 kubelet[2640]: E0516 09:42:56.315093 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.315233 kubelet[2640]: E0516 09:42:56.315222 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.315233 kubelet[2640]: W0516 09:42:56.315232 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.315275 kubelet[2640]: E0516 09:42:56.315239 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.315362 kubelet[2640]: E0516 09:42:56.315352 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.315383 kubelet[2640]: W0516 09:42:56.315362 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.315383 kubelet[2640]: E0516 09:42:56.315369 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.334356 kubelet[2640]: E0516 09:42:56.334321 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.334356 kubelet[2640]: W0516 09:42:56.334342 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.334356 kubelet[2640]: E0516 09:42:56.334356 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.334574 kubelet[2640]: E0516 09:42:56.334539 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.334574 kubelet[2640]: W0516 09:42:56.334558 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.334634 kubelet[2640]: E0516 09:42:56.334579 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.334782 kubelet[2640]: E0516 09:42:56.334742 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.334782 kubelet[2640]: W0516 09:42:56.334769 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.334835 kubelet[2640]: E0516 09:42:56.334784 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.334980 kubelet[2640]: E0516 09:42:56.334954 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.334980 kubelet[2640]: W0516 09:42:56.334969 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.335029 kubelet[2640]: E0516 09:42:56.334987 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.335159 kubelet[2640]: E0516 09:42:56.335136 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.335159 kubelet[2640]: W0516 09:42:56.335149 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.335204 kubelet[2640]: E0516 09:42:56.335160 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.335328 kubelet[2640]: E0516 09:42:56.335308 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.335328 kubelet[2640]: W0516 09:42:56.335320 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.335373 kubelet[2640]: E0516 09:42:56.335330 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.335500 kubelet[2640]: E0516 09:42:56.335488 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.335522 kubelet[2640]: W0516 09:42:56.335499 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.335573 kubelet[2640]: E0516 09:42:56.335553 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.335663 kubelet[2640]: E0516 09:42:56.335651 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.335690 kubelet[2640]: W0516 09:42:56.335663 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.335710 kubelet[2640]: E0516 09:42:56.335691 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.335837 kubelet[2640]: E0516 09:42:56.335823 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.335837 kubelet[2640]: W0516 09:42:56.335834 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.335894 kubelet[2640]: E0516 09:42:56.335847 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.336002 kubelet[2640]: E0516 09:42:56.335988 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.336002 kubelet[2640]: W0516 09:42:56.335999 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.336068 kubelet[2640]: E0516 09:42:56.336011 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.336148 kubelet[2640]: E0516 09:42:56.336137 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.336148 kubelet[2640]: W0516 09:42:56.336147 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.336196 kubelet[2640]: E0516 09:42:56.336158 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.336311 kubelet[2640]: E0516 09:42:56.336300 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.336342 kubelet[2640]: W0516 09:42:56.336311 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.336342 kubelet[2640]: E0516 09:42:56.336323 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.336568 kubelet[2640]: E0516 09:42:56.336543 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.336600 kubelet[2640]: W0516 09:42:56.336569 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.336600 kubelet[2640]: E0516 09:42:56.336589 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.336740 kubelet[2640]: E0516 09:42:56.336729 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.336740 kubelet[2640]: W0516 09:42:56.336741 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.336821 kubelet[2640]: E0516 09:42:56.336765 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.336907 kubelet[2640]: E0516 09:42:56.336896 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.336932 kubelet[2640]: W0516 09:42:56.336907 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.336932 kubelet[2640]: E0516 09:42:56.336920 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.337074 kubelet[2640]: E0516 09:42:56.337063 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.337110 kubelet[2640]: W0516 09:42:56.337074 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.337110 kubelet[2640]: E0516 09:42:56.337088 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:56.337298 kubelet[2640]: E0516 09:42:56.337282 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.337325 kubelet[2640]: W0516 09:42:56.337297 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.337325 kubelet[2640]: E0516 09:42:56.337312 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 09:42:56.337455 kubelet[2640]: E0516 09:42:56.337442 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 09:42:56.337489 kubelet[2640]: W0516 09:42:56.337455 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 09:42:56.337489 kubelet[2640]: E0516 09:42:56.337465 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 09:42:57.204995 kubelet[2640]: E0516 09:42:57.204945 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:42:59.203589 kubelet[2640]: E0516 09:42:59.203541 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:01.204006 kubelet[2640]: E0516 09:43:01.203953 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:02.763167 containerd[1522]: time="2025-05-16T09:43:02.763097630Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:02.764095 containerd[1522]: time="2025-05-16T09:43:02.764064088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 16 09:43:02.765003 containerd[1522]: time="2025-05-16T09:43:02.764957273Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:02.766615 containerd[1522]: time="2025-05-16T09:43:02.766591580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:02.767171 containerd[1522]: time="2025-05-16T09:43:02.767144922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 7.862158287s" May 16 09:43:02.767237 containerd[1522]: time="2025-05-16T09:43:02.767173999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 16 09:43:02.768821 containerd[1522]: time="2025-05-16T09:43:02.768789908Z" level=info msg="CreateContainer within sandbox \"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 09:43:02.789113 containerd[1522]: time="2025-05-16T09:43:02.789079481Z" level=info msg="Container 
18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:02.796308 containerd[1522]: time="2025-05-16T09:43:02.796273000Z" level=info msg="CreateContainer within sandbox \"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\"" May 16 09:43:02.797368 containerd[1522]: time="2025-05-16T09:43:02.796741031Z" level=info msg="StartContainer for \"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\"" May 16 09:43:02.798129 containerd[1522]: time="2025-05-16T09:43:02.798100567Z" level=info msg="connecting to shim 18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8" address="unix:///run/containerd/s/8046153fef3344e7c132526a1f049bab6a6db32fd9db5dbb7c0053409cf4de41" protocol=ttrpc version=3 May 16 09:43:02.820901 systemd[1]: Started cri-containerd-18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8.scope - libcontainer container 18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8. May 16 09:43:02.853570 containerd[1522]: time="2025-05-16T09:43:02.853515544Z" level=info msg="StartContainer for \"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\" returns successfully" May 16 09:43:02.894823 systemd[1]: cri-containerd-18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8.scope: Deactivated successfully. May 16 09:43:02.895085 systemd[1]: cri-containerd-18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8.scope: Consumed 51ms CPU time, 7.9M memory peak, 4.2M written to disk. 
May 16 09:43:02.909438 containerd[1522]: time="2025-05-16T09:43:02.909371995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\" id:\"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\" pid:3351 exited_at:{seconds:1747388582 nanos:901634814}" May 16 09:43:02.909438 containerd[1522]: time="2025-05-16T09:43:02.909381074Z" level=info msg="received exit event container_id:\"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\" id:\"18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8\" pid:3351 exited_at:{seconds:1747388582 nanos:901634814}" May 16 09:43:02.938763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18deab29d0d2d4a7c2fe9565473824612b30828804c1aa25f08cebffc9c293c8-rootfs.mount: Deactivated successfully. May 16 09:43:03.203644 kubelet[2640]: E0516 09:43:03.203496 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:03.292697 kubelet[2640]: E0516 09:43:03.292666 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:03.294809 containerd[1522]: time="2025-05-16T09:43:03.293926246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 16 09:43:03.307511 kubelet[2640]: I0516 09:43:03.307448 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cf5fbcd59-tjjd2" podStartSLOduration=9.319043093 podStartE2EDuration="12.307430586s" podCreationTimestamp="2025-05-16 09:42:51 +0000 UTC" firstStartedPulling="2025-05-16 09:42:51.916030002 +0000 UTC 
m=+13.783757713" lastFinishedPulling="2025-05-16 09:42:54.904417495 +0000 UTC m=+16.772145206" observedRunningTime="2025-05-16 09:42:55.288947232 +0000 UTC m=+17.156674943" watchObservedRunningTime="2025-05-16 09:43:03.307430586 +0000 UTC m=+25.175158257" May 16 09:43:05.203528 kubelet[2640]: E0516 09:43:05.203469 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:05.624504 kubelet[2640]: I0516 09:43:05.624421 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 09:43:05.624724 kubelet[2640]: E0516 09:43:05.624706 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:06.296894 kubelet[2640]: E0516 09:43:06.296802 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:07.203433 kubelet[2640]: E0516 09:43:07.203376 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:07.422422 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:49890.service - OpenSSH per-connection server daemon (10.0.0.1:49890). 
May 16 09:43:07.480812 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 49890 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:07.484023 sshd-session[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:07.487777 systemd-logind[1512]: New session 8 of user core. May 16 09:43:07.498896 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 09:43:07.629514 sshd[3395]: Connection closed by 10.0.0.1 port 49890 May 16 09:43:07.629857 sshd-session[3393]: pam_unix(sshd:session): session closed for user core May 16 09:43:07.634036 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:49890.service: Deactivated successfully. May 16 09:43:07.636028 systemd[1]: session-8.scope: Deactivated successfully. May 16 09:43:07.636994 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. May 16 09:43:07.638144 systemd-logind[1512]: Removed session 8. May 16 09:43:09.065530 containerd[1522]: time="2025-05-16T09:43:09.065483512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:09.065951 containerd[1522]: time="2025-05-16T09:43:09.065910723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 16 09:43:09.066687 containerd[1522]: time="2025-05-16T09:43:09.066655633Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:09.068387 containerd[1522]: time="2025-05-16T09:43:09.068351559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:09.069186 containerd[1522]: time="2025-05-16T09:43:09.069156665Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 5.775190822s" May 16 09:43:09.069219 containerd[1522]: time="2025-05-16T09:43:09.069187542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 16 09:43:09.071072 containerd[1522]: time="2025-05-16T09:43:09.071039578Z" level=info msg="CreateContainer within sandbox \"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 09:43:09.079090 containerd[1522]: time="2025-05-16T09:43:09.079062198Z" level=info msg="Container cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:09.086873 containerd[1522]: time="2025-05-16T09:43:09.086828275Z" level=info msg="CreateContainer within sandbox \"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\"" May 16 09:43:09.087271 containerd[1522]: time="2025-05-16T09:43:09.087251006Z" level=info msg="StartContainer for \"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\"" May 16 09:43:09.089370 containerd[1522]: time="2025-05-16T09:43:09.089342825Z" level=info msg="connecting to shim cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f" address="unix:///run/containerd/s/8046153fef3344e7c132526a1f049bab6a6db32fd9db5dbb7c0053409cf4de41" protocol=ttrpc version=3 May 16 09:43:09.115071 systemd[1]: Started 
cri-containerd-cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f.scope - libcontainer container cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f. May 16 09:43:09.157752 containerd[1522]: time="2025-05-16T09:43:09.157699783Z" level=info msg="StartContainer for \"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\" returns successfully" May 16 09:43:09.203628 kubelet[2640]: E0516 09:43:09.203572 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:09.305187 kubelet[2640]: E0516 09:43:09.305150 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:09.628788 systemd[1]: cri-containerd-cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f.scope: Deactivated successfully. May 16 09:43:09.629128 systemd[1]: cri-containerd-cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f.scope: Consumed 434ms CPU time, 162.6M memory peak, 4K read from disk, 150.3M written to disk. 
May 16 09:43:09.631661 containerd[1522]: time="2025-05-16T09:43:09.631596713Z" level=info msg="received exit event container_id:\"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\" id:\"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\" pid:3438 exited_at:{seconds:1747388589 nanos:631406646}" May 16 09:43:09.631750 containerd[1522]: time="2025-05-16T09:43:09.631707746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\" id:\"cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f\" pid:3438 exited_at:{seconds:1747388589 nanos:631406646}" May 16 09:43:09.648066 kubelet[2640]: I0516 09:43:09.648030 2640 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 16 09:43:09.650899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfc1dcdb7756f52e6a2d75f16093b80ffd78e70d7eb565e09492c5074154d27f-rootfs.mount: Deactivated successfully. May 16 09:43:09.780522 kubelet[2640]: I0516 09:43:09.780395 2640 status_manager.go:890] "Failed to get status for pod" podUID="b2290209-aa67-4111-b3fb-fc2e99015f15" pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" err="pods \"calico-apiserver-94d85fb65-qss2s\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" May 16 09:43:09.782079 systemd[1]: Created slice kubepods-besteffort-podb2290209_aa67_4111_b3fb_fc2e99015f15.slice - libcontainer container kubepods-besteffort-podb2290209_aa67_4111_b3fb_fc2e99015f15.slice. May 16 09:43:09.789264 systemd[1]: Created slice kubepods-besteffort-pod4c6e88f0_444b_48d2_b5b8_1d78a43f3e35.slice - libcontainer container kubepods-besteffort-pod4c6e88f0_444b_48d2_b5b8_1d78a43f3e35.slice. 
May 16 09:43:09.797597 systemd[1]: Created slice kubepods-burstable-pod02f2bcc5_81ed_471b_9e0a_68a1089e0f64.slice - libcontainer container kubepods-burstable-pod02f2bcc5_81ed_471b_9e0a_68a1089e0f64.slice. May 16 09:43:09.802552 systemd[1]: Created slice kubepods-burstable-podac3a38b1_0ebc_4781_af96_f5ad54d64b1d.slice - libcontainer container kubepods-burstable-podac3a38b1_0ebc_4781_af96_f5ad54d64b1d.slice. May 16 09:43:09.806679 systemd[1]: Created slice kubepods-besteffort-pod2a6b8852_00e7_4b63_90e3_1ab7fd7eb08a.slice - libcontainer container kubepods-besteffort-pod2a6b8852_00e7_4b63_90e3_1ab7fd7eb08a.slice. May 16 09:43:09.833325 kubelet[2640]: I0516 09:43:09.833289 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f8rw\" (UniqueName: \"kubernetes.io/projected/b2290209-aa67-4111-b3fb-fc2e99015f15-kube-api-access-5f8rw\") pod \"calico-apiserver-94d85fb65-qss2s\" (UID: \"b2290209-aa67-4111-b3fb-fc2e99015f15\") " pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" May 16 09:43:09.833560 kubelet[2640]: I0516 09:43:09.833488 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k969\" (UniqueName: \"kubernetes.io/projected/ac3a38b1-0ebc-4781-af96-f5ad54d64b1d-kube-api-access-4k969\") pod \"coredns-668d6bf9bc-9wjkd\" (UID: \"ac3a38b1-0ebc-4781-af96-f5ad54d64b1d\") " pod="kube-system/coredns-668d6bf9bc-9wjkd" May 16 09:43:09.833560 kubelet[2640]: I0516 09:43:09.833512 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5qsd\" (UniqueName: \"kubernetes.io/projected/2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a-kube-api-access-b5qsd\") pod \"calico-kube-controllers-f4c99d75c-2cthn\" (UID: \"2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a\") " pod="calico-system/calico-kube-controllers-f4c99d75c-2cthn" May 16 09:43:09.833560 kubelet[2640]: I0516 09:43:09.833533 2640 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jf7c\" (UniqueName: \"kubernetes.io/projected/02f2bcc5-81ed-471b-9e0a-68a1089e0f64-kube-api-access-7jf7c\") pod \"coredns-668d6bf9bc-dnjch\" (UID: \"02f2bcc5-81ed-471b-9e0a-68a1089e0f64\") " pod="kube-system/coredns-668d6bf9bc-dnjch" May 16 09:43:09.833655 kubelet[2640]: I0516 09:43:09.833612 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b2290209-aa67-4111-b3fb-fc2e99015f15-calico-apiserver-certs\") pod \"calico-apiserver-94d85fb65-qss2s\" (UID: \"b2290209-aa67-4111-b3fb-fc2e99015f15\") " pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" May 16 09:43:09.833680 kubelet[2640]: I0516 09:43:09.833655 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a-tigera-ca-bundle\") pod \"calico-kube-controllers-f4c99d75c-2cthn\" (UID: \"2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a\") " pod="calico-system/calico-kube-controllers-f4c99d75c-2cthn" May 16 09:43:09.833680 kubelet[2640]: I0516 09:43:09.833675 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pttb\" (UniqueName: \"kubernetes.io/projected/4c6e88f0-444b-48d2-b5b8-1d78a43f3e35-kube-api-access-6pttb\") pod \"calico-apiserver-94d85fb65-2hklb\" (UID: \"4c6e88f0-444b-48d2-b5b8-1d78a43f3e35\") " pod="calico-apiserver/calico-apiserver-94d85fb65-2hklb" May 16 09:43:09.833724 kubelet[2640]: I0516 09:43:09.833693 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3a38b1-0ebc-4781-af96-f5ad54d64b1d-config-volume\") pod \"coredns-668d6bf9bc-9wjkd\" (UID: \"ac3a38b1-0ebc-4781-af96-f5ad54d64b1d\") " 
pod="kube-system/coredns-668d6bf9bc-9wjkd" May 16 09:43:09.833724 kubelet[2640]: I0516 09:43:09.833710 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c6e88f0-444b-48d2-b5b8-1d78a43f3e35-calico-apiserver-certs\") pod \"calico-apiserver-94d85fb65-2hklb\" (UID: \"4c6e88f0-444b-48d2-b5b8-1d78a43f3e35\") " pod="calico-apiserver/calico-apiserver-94d85fb65-2hklb" May 16 09:43:09.833789 kubelet[2640]: I0516 09:43:09.833726 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02f2bcc5-81ed-471b-9e0a-68a1089e0f64-config-volume\") pod \"coredns-668d6bf9bc-dnjch\" (UID: \"02f2bcc5-81ed-471b-9e0a-68a1089e0f64\") " pod="kube-system/coredns-668d6bf9bc-dnjch" May 16 09:43:10.086563 containerd[1522]: time="2025-05-16T09:43:10.086507522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-qss2s,Uid:b2290209-aa67-4111-b3fb-fc2e99015f15,Namespace:calico-apiserver,Attempt:0,}" May 16 09:43:10.094318 containerd[1522]: time="2025-05-16T09:43:10.094277111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-2hklb,Uid:4c6e88f0-444b-48d2-b5b8-1d78a43f3e35,Namespace:calico-apiserver,Attempt:0,}" May 16 09:43:10.107446 kubelet[2640]: E0516 09:43:10.106981 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:10.108574 containerd[1522]: time="2025-05-16T09:43:10.108541691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnjch,Uid:02f2bcc5-81ed-471b-9e0a-68a1089e0f64,Namespace:kube-system,Attempt:0,}" May 16 09:43:10.116922 containerd[1522]: time="2025-05-16T09:43:10.115169833Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-f4c99d75c-2cthn,Uid:2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a,Namespace:calico-system,Attempt:0,}" May 16 09:43:10.117180 kubelet[2640]: E0516 09:43:10.117068 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:10.118557 containerd[1522]: time="2025-05-16T09:43:10.118242719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjkd,Uid:ac3a38b1-0ebc-4781-af96-f5ad54d64b1d,Namespace:kube-system,Attempt:0,}" May 16 09:43:10.354311 kubelet[2640]: E0516 09:43:10.354219 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:10.356410 containerd[1522]: time="2025-05-16T09:43:10.356361607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 16 09:43:10.474886 containerd[1522]: time="2025-05-16T09:43:10.474531548Z" level=error msg="Failed to destroy network for sandbox \"e6a152acd79f73ee8bcfcaa3f4308aa83485cfb5a08c044ca251d3d0e7a9c5c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.474886 containerd[1522]: time="2025-05-16T09:43:10.474693698Z" level=error msg="Failed to destroy network for sandbox \"90becf5e1348b00a380bbb69f85189f29813f98b4d049d9633df83e97d61fe30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.479446 containerd[1522]: time="2025-05-16T09:43:10.479384282Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjkd,Uid:ac3a38b1-0ebc-4781-af96-f5ad54d64b1d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90becf5e1348b00a380bbb69f85189f29813f98b4d049d9633df83e97d61fe30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.479831 kubelet[2640]: E0516 09:43:10.479786 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90becf5e1348b00a380bbb69f85189f29813f98b4d049d9633df83e97d61fe30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.480137 containerd[1522]: time="2025-05-16T09:43:10.480104156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnjch,Uid:02f2bcc5-81ed-471b-9e0a-68a1089e0f64,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a152acd79f73ee8bcfcaa3f4308aa83485cfb5a08c044ca251d3d0e7a9c5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.480447 kubelet[2640]: E0516 09:43:10.480244 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a152acd79f73ee8bcfcaa3f4308aa83485cfb5a08c044ca251d3d0e7a9c5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.481630 containerd[1522]: 
time="2025-05-16T09:43:10.481472510Z" level=error msg="Failed to destroy network for sandbox \"1073330a292e6c30cb9e5104a939984026a81660f4864a1c0e589af8f179d9f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.483687 containerd[1522]: time="2025-05-16T09:43:10.483630813Z" level=error msg="Failed to destroy network for sandbox \"307e043668e1131b4acc74874e20a9eb72efb125beb4cc1edd5d04407b4c0e79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.484296 kubelet[2640]: E0516 09:43:10.484263 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a152acd79f73ee8bcfcaa3f4308aa83485cfb5a08c044ca251d3d0e7a9c5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dnjch" May 16 09:43:10.484380 kubelet[2640]: E0516 09:43:10.484306 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6a152acd79f73ee8bcfcaa3f4308aa83485cfb5a08c044ca251d3d0e7a9c5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dnjch" May 16 09:43:10.484676 kubelet[2640]: E0516 09:43:10.484526 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dnjch_kube-system(02f2bcc5-81ed-471b-9e0a-68a1089e0f64)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-dnjch_kube-system(02f2bcc5-81ed-471b-9e0a-68a1089e0f64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6a152acd79f73ee8bcfcaa3f4308aa83485cfb5a08c044ca251d3d0e7a9c5c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dnjch" podUID="02f2bcc5-81ed-471b-9e0a-68a1089e0f64" May 16 09:43:10.484801 containerd[1522]: time="2025-05-16T09:43:10.484398925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-2hklb,Uid:4c6e88f0-444b-48d2-b5b8-1d78a43f3e35,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1073330a292e6c30cb9e5104a939984026a81660f4864a1c0e589af8f179d9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.484855 kubelet[2640]: E0516 09:43:10.484672 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90becf5e1348b00a380bbb69f85189f29813f98b4d049d9633df83e97d61fe30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9wjkd" May 16 09:43:10.484855 kubelet[2640]: E0516 09:43:10.484714 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90becf5e1348b00a380bbb69f85189f29813f98b4d049d9633df83e97d61fe30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9wjkd" May 16 09:43:10.484855 kubelet[2640]: E0516 09:43:10.484775 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9wjkd_kube-system(ac3a38b1-0ebc-4781-af96-f5ad54d64b1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9wjkd_kube-system(ac3a38b1-0ebc-4781-af96-f5ad54d64b1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90becf5e1348b00a380bbb69f85189f29813f98b4d049d9633df83e97d61fe30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9wjkd" podUID="ac3a38b1-0ebc-4781-af96-f5ad54d64b1d" May 16 09:43:10.484946 kubelet[2640]: E0516 09:43:10.484831 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1073330a292e6c30cb9e5104a939984026a81660f4864a1c0e589af8f179d9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.485843 kubelet[2640]: E0516 09:43:10.485811 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1073330a292e6c30cb9e5104a939984026a81660f4864a1c0e589af8f179d9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94d85fb65-2hklb" May 16 09:43:10.485843 kubelet[2640]: E0516 09:43:10.485841 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"1073330a292e6c30cb9e5104a939984026a81660f4864a1c0e589af8f179d9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94d85fb65-2hklb" May 16 09:43:10.485919 kubelet[2640]: E0516 09:43:10.485879 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94d85fb65-2hklb_calico-apiserver(4c6e88f0-444b-48d2-b5b8-1d78a43f3e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94d85fb65-2hklb_calico-apiserver(4c6e88f0-444b-48d2-b5b8-1d78a43f3e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1073330a292e6c30cb9e5104a939984026a81660f4864a1c0e589af8f179d9f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94d85fb65-2hklb" podUID="4c6e88f0-444b-48d2-b5b8-1d78a43f3e35" May 16 09:43:10.486356 containerd[1522]: time="2025-05-16T09:43:10.486318044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-qss2s,Uid:b2290209-aa67-4111-b3fb-fc2e99015f15,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"307e043668e1131b4acc74874e20a9eb72efb125beb4cc1edd5d04407b4c0e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.486808 kubelet[2640]: E0516 09:43:10.486730 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"307e043668e1131b4acc74874e20a9eb72efb125beb4cc1edd5d04407b4c0e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.486808 kubelet[2640]: E0516 09:43:10.486787 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307e043668e1131b4acc74874e20a9eb72efb125beb4cc1edd5d04407b4c0e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" May 16 09:43:10.486808 kubelet[2640]: E0516 09:43:10.486802 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"307e043668e1131b4acc74874e20a9eb72efb125beb4cc1edd5d04407b4c0e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" May 16 09:43:10.486921 kubelet[2640]: E0516 09:43:10.486836 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94d85fb65-qss2s_calico-apiserver(b2290209-aa67-4111-b3fb-fc2e99015f15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94d85fb65-qss2s_calico-apiserver(b2290209-aa67-4111-b3fb-fc2e99015f15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"307e043668e1131b4acc74874e20a9eb72efb125beb4cc1edd5d04407b4c0e79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" podUID="b2290209-aa67-4111-b3fb-fc2e99015f15" May 16 09:43:10.490873 containerd[1522]: time="2025-05-16T09:43:10.490837039Z" level=error msg="Failed to destroy network for sandbox \"6d429c798e81b960d5064331cf43429bb4c03b1e280b0f1bae15ed94804bf0c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.491931 containerd[1522]: time="2025-05-16T09:43:10.491890612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f4c99d75c-2cthn,Uid:2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d429c798e81b960d5064331cf43429bb4c03b1e280b0f1bae15ed94804bf0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.492105 kubelet[2640]: E0516 09:43:10.492078 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d429c798e81b960d5064331cf43429bb4c03b1e280b0f1bae15ed94804bf0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:10.492158 kubelet[2640]: E0516 09:43:10.492120 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d429c798e81b960d5064331cf43429bb4c03b1e280b0f1bae15ed94804bf0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-f4c99d75c-2cthn" May 16 09:43:10.492158 kubelet[2640]: E0516 09:43:10.492140 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d429c798e81b960d5064331cf43429bb4c03b1e280b0f1bae15ed94804bf0c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f4c99d75c-2cthn" May 16 09:43:10.492203 kubelet[2640]: E0516 09:43:10.492171 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f4c99d75c-2cthn_calico-system(2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f4c99d75c-2cthn_calico-system(2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d429c798e81b960d5064331cf43429bb4c03b1e280b0f1bae15ed94804bf0c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f4c99d75c-2cthn" podUID="2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a" May 16 09:43:11.080738 systemd[1]: run-netns-cni\x2d3eff990f\x2d02be\x2da93b\x2d39f4\x2dacfe3b2af846.mount: Deactivated successfully. May 16 09:43:11.080834 systemd[1]: run-netns-cni\x2dcfbee163\x2d2d4a\x2dd3f2\x2d1571\x2d207f334ac3dc.mount: Deactivated successfully. May 16 09:43:11.080878 systemd[1]: run-netns-cni\x2da6b1f599\x2d233e\x2daa1d\x2d07fd\x2d4dae1125fa2a.mount: Deactivated successfully. May 16 09:43:11.080918 systemd[1]: run-netns-cni\x2dbb394143\x2df3e1\x2ddccb\x2d7458\x2d76bd70e1c35b.mount: Deactivated successfully. 
May 16 09:43:11.208139 systemd[1]: Created slice kubepods-besteffort-pod598d8256_d9c2_44b4_9749_6e31a479eb52.slice - libcontainer container kubepods-besteffort-pod598d8256_d9c2_44b4_9749_6e31a479eb52.slice. May 16 09:43:11.210094 containerd[1522]: time="2025-05-16T09:43:11.210052622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzdcv,Uid:598d8256-d9c2-44b4-9749-6e31a479eb52,Namespace:calico-system,Attempt:0,}" May 16 09:43:11.252285 containerd[1522]: time="2025-05-16T09:43:11.252221406Z" level=error msg="Failed to destroy network for sandbox \"a218d71bd8ee9e14724c20590db87e481240e7a1babf2f130a55fb0b077d0471\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:11.254212 systemd[1]: run-netns-cni\x2d509cd801\x2df3cf\x2d4aec\x2dca5c\x2dbdb489b4613c.mount: Deactivated successfully. May 16 09:43:11.254555 containerd[1522]: time="2025-05-16T09:43:11.254518230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzdcv,Uid:598d8256-d9c2-44b4-9749-6e31a479eb52,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218d71bd8ee9e14724c20590db87e481240e7a1babf2f130a55fb0b077d0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 09:43:11.255184 kubelet[2640]: E0516 09:43:11.254730 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218d71bd8ee9e14724c20590db87e481240e7a1babf2f130a55fb0b077d0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 
09:43:11.255235 kubelet[2640]: E0516 09:43:11.255209 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218d71bd8ee9e14724c20590db87e481240e7a1babf2f130a55fb0b077d0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tzdcv" May 16 09:43:11.255259 kubelet[2640]: E0516 09:43:11.255233 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a218d71bd8ee9e14724c20590db87e481240e7a1babf2f130a55fb0b077d0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tzdcv" May 16 09:43:11.255305 kubelet[2640]: E0516 09:43:11.255279 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tzdcv_calico-system(598d8256-d9c2-44b4-9749-6e31a479eb52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tzdcv_calico-system(598d8256-d9c2-44b4-9749-6e31a479eb52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a218d71bd8ee9e14724c20590db87e481240e7a1babf2f130a55fb0b077d0471\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tzdcv" podUID="598d8256-d9c2-44b4-9749-6e31a479eb52" May 16 09:43:12.645387 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:44616.service - OpenSSH per-connection server daemon (10.0.0.1:44616). 
May 16 09:43:12.740667 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 44616 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:12.742049 sshd-session[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:12.746204 systemd-logind[1512]: New session 9 of user core. May 16 09:43:12.760917 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 09:43:12.869793 sshd[3699]: Connection closed by 10.0.0.1 port 44616 May 16 09:43:12.870310 sshd-session[3697]: pam_unix(sshd:session): session closed for user core May 16 09:43:12.874200 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:44616.service: Deactivated successfully. May 16 09:43:12.876342 systemd[1]: session-9.scope: Deactivated successfully. May 16 09:43:12.877132 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. May 16 09:43:12.878292 systemd-logind[1512]: Removed session 9. May 16 09:43:17.881352 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:44620.service - OpenSSH per-connection server daemon (10.0.0.1:44620). May 16 09:43:18.021379 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 44620 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:18.022646 sshd-session[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:18.029091 systemd-logind[1512]: New session 10 of user core. May 16 09:43:18.033976 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 09:43:18.166565 sshd[3723]: Connection closed by 10.0.0.1 port 44620 May 16 09:43:18.166825 sshd-session[3721]: pam_unix(sshd:session): session closed for user core May 16 09:43:18.170448 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:44620.service: Deactivated successfully. May 16 09:43:18.173548 systemd[1]: session-10.scope: Deactivated successfully. May 16 09:43:18.174909 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit. 
May 16 09:43:18.176608 systemd-logind[1512]: Removed session 10. May 16 09:43:18.627509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount551714386.mount: Deactivated successfully. May 16 09:43:18.821399 containerd[1522]: time="2025-05-16T09:43:18.821343196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:18.833065 containerd[1522]: time="2025-05-16T09:43:18.821803031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 16 09:43:18.833065 containerd[1522]: time="2025-05-16T09:43:18.822646222Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:18.833405 containerd[1522]: time="2025-05-16T09:43:18.824565120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 8.468163075s" May 16 09:43:18.833405 containerd[1522]: time="2025-05-16T09:43:18.833314421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 16 09:43:18.833557 containerd[1522]: time="2025-05-16T09:43:18.833495699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:18.868583 containerd[1522]: time="2025-05-16T09:43:18.868338826Z" level=info msg="CreateContainer within sandbox 
\"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 09:43:18.877054 containerd[1522]: time="2025-05-16T09:43:18.876895729Z" level=info msg="Container 50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:18.902250 containerd[1522]: time="2025-05-16T09:43:18.902156004Z" level=info msg="CreateContainer within sandbox \"146fb96ca98b1ea368d172c784b9ffa106666d1ae24536101b7db1e3f9f5f066\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\"" May 16 09:43:18.903686 containerd[1522]: time="2025-05-16T09:43:18.903647187Z" level=info msg="StartContainer for \"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\"" May 16 09:43:18.905257 containerd[1522]: time="2025-05-16T09:43:18.905225649Z" level=info msg="connecting to shim 50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec" address="unix:///run/containerd/s/8046153fef3344e7c132526a1f049bab6a6db32fd9db5dbb7c0053409cf4de41" protocol=ttrpc version=3 May 16 09:43:18.927900 systemd[1]: Started cri-containerd-50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec.scope - libcontainer container 50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec. May 16 09:43:18.966705 containerd[1522]: time="2025-05-16T09:43:18.965026534Z" level=info msg="StartContainer for \"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\" returns successfully" May 16 09:43:19.139976 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 09:43:19.140071 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 16 09:43:19.375472 kubelet[2640]: E0516 09:43:19.375428 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:19.411718 kubelet[2640]: I0516 09:43:19.411615 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d5w78" podStartSLOduration=1.599554981 podStartE2EDuration="28.411598778s" podCreationTimestamp="2025-05-16 09:42:51 +0000 UTC" firstStartedPulling="2025-05-16 09:42:52.04021317 +0000 UTC m=+13.907940881" lastFinishedPulling="2025-05-16 09:43:18.852256967 +0000 UTC m=+40.719984678" observedRunningTime="2025-05-16 09:43:19.404161659 +0000 UTC m=+41.271889410" watchObservedRunningTime="2025-05-16 09:43:19.411598778 +0000 UTC m=+41.279326489" May 16 09:43:19.473660 containerd[1522]: time="2025-05-16T09:43:19.473617256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\" id:\"2ae91dd32063b855581c8e99a8d700f938386644911a10c7ffefff18cdb9b928\" pid:3809 exit_status:1 exited_at:{seconds:1747388599 nanos:473345659}" May 16 09:43:20.376779 kubelet[2640]: E0516 09:43:20.376704 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:20.526537 containerd[1522]: time="2025-05-16T09:43:20.526495689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\" id:\"29c05672cf9cdb3fb193955283163e856b14f70c9ecbec0534f55abcce95131a\" pid:3920 exit_status:1 exited_at:{seconds:1747388600 nanos:524598910}" May 16 09:43:20.758971 systemd-networkd[1464]: vxlan.calico: Link UP May 16 09:43:20.758978 systemd-networkd[1464]: vxlan.calico: Gained carrier May 16 09:43:21.204409 kubelet[2640]: E0516 
09:43:21.204226 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:21.204732 containerd[1522]: time="2025-05-16T09:43:21.204688263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-qss2s,Uid:b2290209-aa67-4111-b3fb-fc2e99015f15,Namespace:calico-apiserver,Attempt:0,}" May 16 09:43:21.205113 containerd[1522]: time="2025-05-16T09:43:21.204700782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnjch,Uid:02f2bcc5-81ed-471b-9e0a-68a1089e0f64,Namespace:kube-system,Attempt:0,}" May 16 09:43:21.390181 kubelet[2640]: E0516 09:43:21.390141 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:21.481246 containerd[1522]: time="2025-05-16T09:43:21.481188229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\" id:\"7a1e1d9e6b956b300032d21b11763329693172f956adb847b7f2c1207f4be62d\" pid:4101 exit_status:1 exited_at:{seconds:1747388601 nanos:480909671}" May 16 09:43:21.541400 systemd-networkd[1464]: cali12d77899cc8: Link UP May 16 09:43:21.541715 systemd-networkd[1464]: cali12d77899cc8: Gained carrier May 16 09:43:21.557312 containerd[1522]: 2025-05-16 09:43:21.343 [INFO][4055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0 calico-apiserver-94d85fb65- calico-apiserver b2290209-aa67-4111-b3fb-fc2e99015f15 753 0 2025-05-16 09:42:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94d85fb65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-94d85fb65-qss2s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12d77899cc8 [] []}} ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-" May 16 09:43:21.557312 containerd[1522]: 2025-05-16 09:43:21.343 [INFO][4055] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.557312 containerd[1522]: 2025-05-16 09:43:21.488 [INFO][4075] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" HandleID="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Workload="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.507 [INFO][4075] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" HandleID="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Workload="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-94d85fb65-qss2s", "timestamp":"2025-05-16 09:43:21.488369874 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.507 [INFO][4075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.507 [INFO][4075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.507 [INFO][4075] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.509 [INFO][4075] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" host="localhost" May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.513 [INFO][4075] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.520 [INFO][4075] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.522 [INFO][4075] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.523 [INFO][4075] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:43:21.557851 containerd[1522]: 2025-05-16 09:43:21.523 [INFO][4075] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" host="localhost" May 16 09:43:21.558897 containerd[1522]: 2025-05-16 09:43:21.525 [INFO][4075] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a May 16 09:43:21.558897 containerd[1522]: 2025-05-16 09:43:21.528 [INFO][4075] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" host="localhost" May 16 09:43:21.558897 containerd[1522]: 2025-05-16 09:43:21.533 [INFO][4075] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" host="localhost" May 16 09:43:21.558897 containerd[1522]: 2025-05-16 09:43:21.533 [INFO][4075] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" host="localhost" May 16 09:43:21.558897 containerd[1522]: 2025-05-16 09:43:21.533 [INFO][4075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:43:21.558897 containerd[1522]: 2025-05-16 09:43:21.533 [INFO][4075] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" HandleID="k8s-pod-network.f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Workload="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.559012 containerd[1522]: 2025-05-16 09:43:21.536 [INFO][4055] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0", GenerateName:"calico-apiserver-94d85fb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2290209-aa67-4111-b3fb-fc2e99015f15", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 50, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94d85fb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-94d85fb65-qss2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d77899cc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:21.559066 containerd[1522]: 2025-05-16 09:43:21.536 [INFO][4055] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.559066 containerd[1522]: 2025-05-16 09:43:21.536 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12d77899cc8 ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.559066 containerd[1522]: 2025-05-16 09:43:21.542 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" 
Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.559123 containerd[1522]: 2025-05-16 09:43:21.542 [INFO][4055] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0", GenerateName:"calico-apiserver-94d85fb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2290209-aa67-4111-b3fb-fc2e99015f15", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94d85fb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a", Pod:"calico-apiserver-94d85fb65-qss2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12d77899cc8", MAC:"46:8e:cb:89:7e:64", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:21.559168 containerd[1522]: 2025-05-16 09:43:21.554 [INFO][4055] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-qss2s" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--qss2s-eth0" May 16 09:43:21.607647 containerd[1522]: time="2025-05-16T09:43:21.606239849Z" level=info msg="connecting to shim f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a" address="unix:///run/containerd/s/f28ea5f4d7272ca605c65c7c9657410c2ab276b6f5fa91d6ca79a025a895c3b5" namespace=k8s.io protocol=ttrpc version=3 May 16 09:43:21.637042 systemd[1]: Started cri-containerd-f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a.scope - libcontainer container f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a. May 16 09:43:21.645569 systemd-networkd[1464]: cali02a5e153d8d: Link UP May 16 09:43:21.645814 systemd-networkd[1464]: cali02a5e153d8d: Gained carrier May 16 09:43:21.657659 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:43:21.660459 containerd[1522]: 2025-05-16 09:43:21.342 [INFO][4045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dnjch-eth0 coredns-668d6bf9bc- kube-system 02f2bcc5-81ed-471b-9e0a-68a1089e0f64 755 0 2025-05-16 09:42:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dnjch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02a5e153d8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-" May 16 09:43:21.660459 containerd[1522]: 2025-05-16 09:43:21.342 [INFO][4045] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 09:43:21.660459 containerd[1522]: 2025-05-16 09:43:21.488 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" HandleID="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Workload="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.507 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" HandleID="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Workload="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dnjch", "timestamp":"2025-05-16 09:43:21.488361514 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.507 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.533 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.533 [INFO][4077] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.614 [INFO][4077] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" host="localhost" May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.619 [INFO][4077] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.623 [INFO][4077] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.625 [INFO][4077] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.627 [INFO][4077] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:43:21.660838 containerd[1522]: 2025-05-16 09:43:21.628 [INFO][4077] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" host="localhost" May 16 09:43:21.661038 containerd[1522]: 2025-05-16 09:43:21.629 [INFO][4077] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1 May 16 09:43:21.661038 containerd[1522]: 2025-05-16 09:43:21.632 [INFO][4077] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" host="localhost" May 16 09:43:21.661038 containerd[1522]: 2025-05-16 09:43:21.637 [INFO][4077] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" host="localhost" May 16 09:43:21.661038 containerd[1522]: 2025-05-16 09:43:21.637 [INFO][4077] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" host="localhost" May 16 09:43:21.661038 containerd[1522]: 2025-05-16 09:43:21.637 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:43:21.661038 containerd[1522]: 2025-05-16 09:43:21.637 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" HandleID="k8s-pod-network.b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Workload="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 09:43:21.661141 containerd[1522]: 2025-05-16 09:43:21.641 [INFO][4045] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dnjch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02f2bcc5-81ed-471b-9e0a-68a1089e0f64", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dnjch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02a5e153d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:21.661189 containerd[1522]: 2025-05-16 09:43:21.642 [INFO][4045] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 09:43:21.661189 containerd[1522]: 2025-05-16 09:43:21.642 [INFO][4045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02a5e153d8d ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 09:43:21.661189 containerd[1522]: 2025-05-16 09:43:21.645 [INFO][4045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 
09:43:21.661247 containerd[1522]: 2025-05-16 09:43:21.646 [INFO][4045] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dnjch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02f2bcc5-81ed-471b-9e0a-68a1089e0f64", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1", Pod:"coredns-668d6bf9bc-dnjch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02a5e153d8d", MAC:"46:a2:df:6f:08:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:21.661247 containerd[1522]: 2025-05-16 09:43:21.656 [INFO][4045] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnjch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dnjch-eth0" May 16 09:43:21.684599 containerd[1522]: time="2025-05-16T09:43:21.684566355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-qss2s,Uid:b2290209-aa67-4111-b3fb-fc2e99015f15,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a\"" May 16 09:43:21.691913 containerd[1522]: time="2025-05-16T09:43:21.691885279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 16 09:43:21.698378 containerd[1522]: time="2025-05-16T09:43:21.698117934Z" level=info msg="connecting to shim b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1" address="unix:///run/containerd/s/7328b77170751ea3f427871cb46d3cf906841bbcfc2e9ffad547e79aeab51306" namespace=k8s.io protocol=ttrpc version=3 May 16 09:43:21.720905 systemd[1]: Started cri-containerd-b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1.scope - libcontainer container b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1. 
May 16 09:43:21.731451 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:43:21.751999 containerd[1522]: time="2025-05-16T09:43:21.751954894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnjch,Uid:02f2bcc5-81ed-471b-9e0a-68a1089e0f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1\"" May 16 09:43:21.752875 kubelet[2640]: E0516 09:43:21.752852 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:21.766065 containerd[1522]: time="2025-05-16T09:43:21.766040148Z" level=info msg="CreateContainer within sandbox \"b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 09:43:21.782200 containerd[1522]: time="2025-05-16T09:43:21.782162820Z" level=info msg="Container d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:21.789898 containerd[1522]: time="2025-05-16T09:43:21.789863020Z" level=info msg="CreateContainer within sandbox \"b9de74e4d1dfffb507561e296e1d46615af42b647571a0f78b4b62bf9abd83f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4\"" May 16 09:43:21.790519 containerd[1522]: time="2025-05-16T09:43:21.790491734Z" level=info msg="StartContainer for \"d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4\"" May 16 09:43:21.791477 containerd[1522]: time="2025-05-16T09:43:21.791444724Z" level=info msg="connecting to shim d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4" address="unix:///run/containerd/s/7328b77170751ea3f427871cb46d3cf906841bbcfc2e9ffad547e79aeab51306" protocol=ttrpc version=3 
May 16 09:43:21.816946 systemd[1]: Started cri-containerd-d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4.scope - libcontainer container d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4. May 16 09:43:21.842203 containerd[1522]: time="2025-05-16T09:43:21.842163437Z" level=info msg="StartContainer for \"d6fc1fcab8b37036cdf22ed271588caf4a16ef27d5ee73d819e6940ce20af9a4\" returns successfully" May 16 09:43:22.133911 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL May 16 09:43:22.204122 kubelet[2640]: E0516 09:43:22.204088 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:22.204546 containerd[1522]: time="2025-05-16T09:43:22.204505088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-2hklb,Uid:4c6e88f0-444b-48d2-b5b8-1d78a43f3e35,Namespace:calico-apiserver,Attempt:0,}" May 16 09:43:22.204914 containerd[1522]: time="2025-05-16T09:43:22.204887204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjkd,Uid:ac3a38b1-0ebc-4781-af96-f5ad54d64b1d,Namespace:kube-system,Attempt:0,}" May 16 09:43:22.350880 systemd-networkd[1464]: cali53447118b1a: Link UP May 16 09:43:22.351033 systemd-networkd[1464]: cali53447118b1a: Gained carrier May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.253 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0 calico-apiserver-94d85fb65- calico-apiserver 4c6e88f0-444b-48d2-b5b8-1d78a43f3e35 757 0 2025-05-16 09:42:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94d85fb65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-94d85fb65-2hklb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali53447118b1a [] []}} ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.254 [INFO][4283] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.296 [INFO][4300] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" HandleID="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Workload="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.310 [INFO][4300] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" HandleID="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Workload="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005037f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-94d85fb65-2hklb", "timestamp":"2025-05-16 09:43:22.296985232 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:43:22.365583 containerd[1522]: 
2025-05-16 09:43:22.310 [INFO][4300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.311 [INFO][4300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.311 [INFO][4300] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.314 [INFO][4300] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.320 [INFO][4300] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.326 [INFO][4300] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.328 [INFO][4300] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.332 [INFO][4300] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.332 [INFO][4300] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.334 [INFO][4300] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7 May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.339 [INFO][4300] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" 
host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.345 [INFO][4300] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.345 [INFO][4300] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" host="localhost" May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.345 [INFO][4300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:43:22.365583 containerd[1522]: 2025-05-16 09:43:22.345 [INFO][4300] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" HandleID="k8s-pod-network.365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Workload="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.366106 containerd[1522]: 2025-05-16 09:43:22.348 [INFO][4283] cni-plugin/k8s.go 386: Populated endpoint ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0", GenerateName:"calico-apiserver-94d85fb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c6e88f0-444b-48d2-b5b8-1d78a43f3e35", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94d85fb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-94d85fb65-2hklb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53447118b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:22.366106 containerd[1522]: 2025-05-16 09:43:22.348 [INFO][4283] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.366106 containerd[1522]: 2025-05-16 09:43:22.348 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53447118b1a ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.366106 containerd[1522]: 2025-05-16 09:43:22.350 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.366106 containerd[1522]: 2025-05-16 09:43:22.351 [INFO][4283] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0", GenerateName:"calico-apiserver-94d85fb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c6e88f0-444b-48d2-b5b8-1d78a43f3e35", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94d85fb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7", Pod:"calico-apiserver-94d85fb65-2hklb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali53447118b1a", MAC:"4e:32:bb:74:27:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 
09:43:22.366106 containerd[1522]: 2025-05-16 09:43:22.363 [INFO][4283] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" Namespace="calico-apiserver" Pod="calico-apiserver-94d85fb65-2hklb" WorkloadEndpoint="localhost-k8s-calico--apiserver--94d85fb65--2hklb-eth0" May 16 09:43:22.397344 kubelet[2640]: E0516 09:43:22.397219 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:22.433584 containerd[1522]: time="2025-05-16T09:43:22.433523851Z" level=info msg="connecting to shim 365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7" address="unix:///run/containerd/s/f90a5f438a458206f58531138b494b126b2cbf21593e40f0cbc572791bef2f0b" namespace=k8s.io protocol=ttrpc version=3 May 16 09:43:22.448330 kubelet[2640]: I0516 09:43:22.447683 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dnjch" podStartSLOduration=38.447663148 podStartE2EDuration="38.447663148s" podCreationTimestamp="2025-05-16 09:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:43:22.429648771 +0000 UTC m=+44.297376482" watchObservedRunningTime="2025-05-16 09:43:22.447663148 +0000 UTC m=+44.315390859" May 16 09:43:22.471194 systemd[1]: Started cri-containerd-365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7.scope - libcontainer container 365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7. 
May 16 09:43:22.478064 systemd-networkd[1464]: cali3cf408fe67b: Link UP May 16 09:43:22.478615 systemd-networkd[1464]: cali3cf408fe67b: Gained carrier May 16 09:43:22.495656 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.255 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0 coredns-668d6bf9bc- kube-system ac3a38b1-0ebc-4781-af96-f5ad54d64b1d 756 0 2025-05-16 09:42:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9wjkd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3cf408fe67b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.255 [INFO][4272] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.305 [INFO][4306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" HandleID="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Workload="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.321 [INFO][4306] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" HandleID="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Workload="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9wjkd", "timestamp":"2025-05-16 09:43:22.305262349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.321 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.345 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.345 [INFO][4306] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.413 [INFO][4306] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.425 [INFO][4306] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.436 [INFO][4306] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.438 [INFO][4306] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.441 [INFO][4306] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.441 [INFO][4306] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.443 [INFO][4306] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.453 [INFO][4306] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.461 [INFO][4306] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.461 [INFO][4306] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" host="localhost" May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.461 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 09:43:22.498770 containerd[1522]: 2025-05-16 09:43:22.462 [INFO][4306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" HandleID="k8s-pod-network.82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Workload="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.499244 containerd[1522]: 2025-05-16 09:43:22.468 [INFO][4272] cni-plugin/k8s.go 386: Populated endpoint ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac3a38b1-0ebc-4781-af96-f5ad54d64b1d", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9wjkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cf408fe67b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:22.499244 containerd[1522]: 2025-05-16 09:43:22.468 [INFO][4272] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.499244 containerd[1522]: 2025-05-16 09:43:22.469 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cf408fe67b ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.499244 containerd[1522]: 2025-05-16 09:43:22.478 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.499244 containerd[1522]: 2025-05-16 09:43:22.481 [INFO][4272] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac3a38b1-0ebc-4781-af96-f5ad54d64b1d", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee", Pod:"coredns-668d6bf9bc-9wjkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cf408fe67b", MAC:"9a:84:9f:8d:0f:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:22.499244 containerd[1522]: 2025-05-16 09:43:22.492 [INFO][4272] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wjkd" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9wjkd-eth0" May 16 09:43:22.529184 containerd[1522]: time="2025-05-16T09:43:22.529144604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94d85fb65-2hklb,Uid:4c6e88f0-444b-48d2-b5b8-1d78a43f3e35,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7\"" May 16 09:43:22.539279 containerd[1522]: time="2025-05-16T09:43:22.539236742Z" level=info msg="connecting to shim 82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee" address="unix:///run/containerd/s/a63f235266c480fe8f4a8f14dd63b78de3518cc7909b6da65cdce384206296fd" namespace=k8s.io protocol=ttrpc version=3 May 16 09:43:22.570915 systemd[1]: Started cri-containerd-82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee.scope - libcontainer container 82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee. May 16 09:43:22.582040 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:43:22.601217 containerd[1522]: time="2025-05-16T09:43:22.601177716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wjkd,Uid:ac3a38b1-0ebc-4781-af96-f5ad54d64b1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee\"" May 16 09:43:22.601914 kubelet[2640]: E0516 09:43:22.601879 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:22.608392 containerd[1522]: time="2025-05-16T09:43:22.608350523Z" level=info msg="CreateContainer within sandbox \"82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 09:43:22.635262 containerd[1522]: time="2025-05-16T09:43:22.635211812Z" level=info 
msg="Container b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:22.639778 containerd[1522]: time="2025-05-16T09:43:22.639517848Z" level=info msg="CreateContainer within sandbox \"82f02298c55799d5431341e4bf9e104298a4ff3c4360f5ecdfbf59c76cdaf7ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12\"" May 16 09:43:22.640370 containerd[1522]: time="2025-05-16T09:43:22.640338680Z" level=info msg="StartContainer for \"b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12\"" May 16 09:43:22.641375 containerd[1522]: time="2025-05-16T09:43:22.641282790Z" level=info msg="connecting to shim b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12" address="unix:///run/containerd/s/a63f235266c480fe8f4a8f14dd63b78de3518cc7909b6da65cdce384206296fd" protocol=ttrpc version=3 May 16 09:43:22.660921 systemd[1]: Started cri-containerd-b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12.scope - libcontainer container b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12. May 16 09:43:22.707234 containerd[1522]: time="2025-05-16T09:43:22.707124484Z" level=info msg="StartContainer for \"b86ebc3bcc439b8c25d28382c4baaca852937354384a1cce651af1929b419e12\" returns successfully" May 16 09:43:22.709906 systemd-networkd[1464]: cali12d77899cc8: Gained IPv6LL May 16 09:43:23.178437 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:49428.service - OpenSSH per-connection server daemon (10.0.0.1:49428). May 16 09:43:23.227654 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 49428 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:23.229494 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:23.233442 systemd-logind[1512]: New session 11 of user core. 
May 16 09:43:23.247890 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 09:43:23.370121 sshd[4484]: Connection closed by 10.0.0.1 port 49428 May 16 09:43:23.370673 sshd-session[4482]: pam_unix(sshd:session): session closed for user core May 16 09:43:23.387845 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:49428.service: Deactivated successfully. May 16 09:43:23.389996 systemd[1]: session-11.scope: Deactivated successfully. May 16 09:43:23.390917 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit. May 16 09:43:23.394113 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:49438.service - OpenSSH per-connection server daemon (10.0.0.1:49438). May 16 09:43:23.394820 systemd-logind[1512]: Removed session 11. May 16 09:43:23.406707 kubelet[2640]: E0516 09:43:23.406221 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:23.413184 kubelet[2640]: E0516 09:43:23.413156 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:23.416260 kubelet[2640]: I0516 09:43:23.416197 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9wjkd" podStartSLOduration=39.416182787 podStartE2EDuration="39.416182787s" podCreationTimestamp="2025-05-16 09:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 09:43:23.415017278 +0000 UTC m=+45.282744989" watchObservedRunningTime="2025-05-16 09:43:23.416182787 +0000 UTC m=+45.283910498" May 16 09:43:23.457702 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 49438 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:23.459187 sshd-session[4501]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:23.472817 systemd-logind[1512]: New session 12 of user core. May 16 09:43:23.478964 systemd-networkd[1464]: cali02a5e153d8d: Gained IPv6LL May 16 09:43:23.480923 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 09:43:23.621890 sshd[4503]: Connection closed by 10.0.0.1 port 49438 May 16 09:43:23.622774 sshd-session[4501]: pam_unix(sshd:session): session closed for user core May 16 09:43:23.637978 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:49438.service: Deactivated successfully. May 16 09:43:23.642640 systemd[1]: session-12.scope: Deactivated successfully. May 16 09:43:23.645455 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit. May 16 09:43:23.647731 systemd-logind[1512]: Removed session 12. May 16 09:43:23.649975 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:49450.service - OpenSSH per-connection server daemon (10.0.0.1:49450). May 16 09:43:23.669907 systemd-networkd[1464]: cali53447118b1a: Gained IPv6LL May 16 09:43:23.707692 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 49450 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:23.708843 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:23.715990 systemd-logind[1512]: New session 13 of user core. May 16 09:43:23.726954 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 09:43:23.797925 systemd-networkd[1464]: cali3cf408fe67b: Gained IPv6LL May 16 09:43:23.847905 sshd[4520]: Connection closed by 10.0.0.1 port 49450 May 16 09:43:23.848445 sshd-session[4518]: pam_unix(sshd:session): session closed for user core May 16 09:43:23.851634 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:49450.service: Deactivated successfully. May 16 09:43:23.853266 systemd[1]: session-13.scope: Deactivated successfully. May 16 09:43:23.854611 systemd-logind[1512]: Session 13 logged out. 
Waiting for processes to exit. May 16 09:43:23.856217 systemd-logind[1512]: Removed session 13. May 16 09:43:24.204064 containerd[1522]: time="2025-05-16T09:43:24.204017207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f4c99d75c-2cthn,Uid:2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a,Namespace:calico-system,Attempt:0,}" May 16 09:43:24.314660 systemd-networkd[1464]: cali30ac42767c9: Link UP May 16 09:43:24.315044 systemd-networkd[1464]: cali30ac42767c9: Gained carrier May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.246 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0 calico-kube-controllers-f4c99d75c- calico-system 2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a 758 0 2025-05-16 09:42:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f4c99d75c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f4c99d75c-2cthn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali30ac42767c9 [] []}} ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.246 [INFO][4534] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.273 [INFO][4549] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" HandleID="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Workload="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.285 [INFO][4549] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" HandleID="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Workload="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027bf60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f4c99d75c-2cthn", "timestamp":"2025-05-16 09:43:24.273901418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.285 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.285 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.285 [INFO][4549] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.287 [INFO][4549] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.291 [INFO][4549] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.295 [INFO][4549] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.296 [INFO][4549] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.298 [INFO][4549] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.299 [INFO][4549] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.300 [INFO][4549] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635 May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.304 [INFO][4549] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.310 [INFO][4549] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.310 [INFO][4549] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" host="localhost" May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.310 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:43:24.328505 containerd[1522]: 2025-05-16 09:43:24.310 [INFO][4549] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" HandleID="k8s-pod-network.57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Workload="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.329347 containerd[1522]: 2025-05-16 09:43:24.313 [INFO][4534] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0", GenerateName:"calico-kube-controllers-f4c99d75c-", Namespace:"calico-system", SelfLink:"", UID:"2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f4c99d75c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f4c99d75c-2cthn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali30ac42767c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:24.329347 containerd[1522]: 2025-05-16 09:43:24.313 [INFO][4534] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.329347 containerd[1522]: 2025-05-16 09:43:24.313 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30ac42767c9 ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.329347 containerd[1522]: 2025-05-16 09:43:24.315 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.329347 containerd[1522]: 2025-05-16 09:43:24.315 [INFO][4534] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0", GenerateName:"calico-kube-controllers-f4c99d75c-", Namespace:"calico-system", SelfLink:"", UID:"2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f4c99d75c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635", Pod:"calico-kube-controllers-f4c99d75c-2cthn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali30ac42767c9", MAC:"2e:29:0a:ec:eb:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:24.329347 containerd[1522]: 2025-05-16 09:43:24.325 [INFO][4534] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" Namespace="calico-system" Pod="calico-kube-controllers-f4c99d75c-2cthn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4c99d75c--2cthn-eth0" May 16 09:43:24.349286 containerd[1522]: time="2025-05-16T09:43:24.349248976Z" level=info msg="connecting to shim 57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635" address="unix:///run/containerd/s/bd461cab5bb78e4be170bd44fa3efb396c3f863c9d8ed2bd9f16f6c2a4cd1080" namespace=k8s.io protocol=ttrpc version=3 May 16 09:43:24.373900 systemd[1]: Started cri-containerd-57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635.scope - libcontainer container 57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635. May 16 09:43:24.388329 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:43:24.411652 containerd[1522]: time="2025-05-16T09:43:24.411610579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f4c99d75c-2cthn,Uid:2a6b8852-00e7-4b63-90e3-1ab7fd7eb08a,Namespace:calico-system,Attempt:0,} returns sandbox id \"57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635\"" May 16 09:43:24.418489 kubelet[2640]: E0516 09:43:24.418442 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:24.419699 kubelet[2640]: E0516 09:43:24.418456 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 09:43:25.781976 systemd-networkd[1464]: cali30ac42767c9: Gained IPv6LL May 16 09:43:26.204052 containerd[1522]: time="2025-05-16T09:43:26.203896292Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-tzdcv,Uid:598d8256-d9c2-44b4-9749-6e31a479eb52,Namespace:calico-system,Attempt:0,}" May 16 09:43:26.310287 systemd-networkd[1464]: calide12b5df560: Link UP May 16 09:43:26.311003 systemd-networkd[1464]: calide12b5df560: Gained carrier May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.242 [INFO][4622] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tzdcv-eth0 csi-node-driver- calico-system 598d8256-d9c2-44b4-9749-6e31a479eb52 599 0 2025-05-16 09:42:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tzdcv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calide12b5df560 [] []}} ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.242 [INFO][4622] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.267 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" HandleID="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Workload="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 
09:43:26.278 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" HandleID="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Workload="localhost-k8s-csi--node--driver--tzdcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tzdcv", "timestamp":"2025-05-16 09:43:26.267278276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.279 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.279 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.279 [INFO][4637] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.281 [INFO][4637] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.285 [INFO][4637] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.290 [INFO][4637] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.292 [INFO][4637] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.294 [INFO][4637] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.294 [INFO][4637] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.295 [INFO][4637] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9 May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.298 [INFO][4637] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.304 [INFO][4637] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.304 [INFO][4637] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" host="localhost" May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.304 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 09:43:26.322902 containerd[1522]: 2025-05-16 09:43:26.304 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" HandleID="k8s-pod-network.c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Workload="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.323421 containerd[1522]: 2025-05-16 09:43:26.306 [INFO][4622] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tzdcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"598d8256-d9c2-44b4-9749-6e31a479eb52", ResourceVersion:"599", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tzdcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide12b5df560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:26.323421 containerd[1522]: 2025-05-16 09:43:26.306 [INFO][4622] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.323421 containerd[1522]: 2025-05-16 09:43:26.306 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide12b5df560 ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.323421 containerd[1522]: 2025-05-16 09:43:26.311 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.323421 containerd[1522]: 2025-05-16 09:43:26.312 [INFO][4622] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" 
Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tzdcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"598d8256-d9c2-44b4-9749-6e31a479eb52", ResourceVersion:"599", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 9, 42, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9", Pod:"csi-node-driver-tzdcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide12b5df560", MAC:"92:ff:3e:74:24:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 16 09:43:26.323421 containerd[1522]: 2025-05-16 09:43:26.319 [INFO][4622] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" Namespace="calico-system" Pod="csi-node-driver-tzdcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzdcv-eth0" May 16 09:43:26.349214 containerd[1522]: 
time="2025-05-16T09:43:26.349165613Z" level=info msg="connecting to shim c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9" address="unix:///run/containerd/s/ce22c8dd826996b71b83d872e4b7c9f5e9f11c9fb6c36a6a4cf4cc217727da5e" namespace=k8s.io protocol=ttrpc version=3 May 16 09:43:26.371896 systemd[1]: Started cri-containerd-c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9.scope - libcontainer container c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9. May 16 09:43:26.381505 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 09:43:26.392816 containerd[1522]: time="2025-05-16T09:43:26.392782898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzdcv,Uid:598d8256-d9c2-44b4-9749-6e31a479eb52,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9\"" May 16 09:43:27.958134 systemd-networkd[1464]: calide12b5df560: Gained IPv6LL May 16 09:43:28.869011 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:49458.service - OpenSSH per-connection server daemon (10.0.0.1:49458). May 16 09:43:28.945860 sshd[4709]: Accepted publickey for core from 10.0.0.1 port 49458 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:28.947619 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:28.955301 systemd-logind[1512]: New session 14 of user core. May 16 09:43:28.960978 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 09:43:29.099478 sshd[4714]: Connection closed by 10.0.0.1 port 49458 May 16 09:43:29.099995 sshd-session[4709]: pam_unix(sshd:session): session closed for user core May 16 09:43:29.103186 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:49458.service: Deactivated successfully. May 16 09:43:29.105018 systemd[1]: session-14.scope: Deactivated successfully. 
May 16 09:43:29.107088 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit. May 16 09:43:29.108048 systemd-logind[1512]: Removed session 14. May 16 09:43:31.628408 containerd[1522]: time="2025-05-16T09:43:31.628025060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:31.628408 containerd[1522]: time="2025-05-16T09:43:31.628398457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 16 09:43:31.629307 containerd[1522]: time="2025-05-16T09:43:31.629255810Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:31.631171 containerd[1522]: time="2025-05-16T09:43:31.631112196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:31.631896 containerd[1522]: time="2025-05-16T09:43:31.631866030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 9.939842473s" May 16 09:43:31.631896 containerd[1522]: time="2025-05-16T09:43:31.631895149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 16 09:43:31.632757 containerd[1522]: time="2025-05-16T09:43:31.632727063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 16 
09:43:31.638456 containerd[1522]: time="2025-05-16T09:43:31.638417377Z" level=info msg="CreateContainer within sandbox \"f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 09:43:31.647773 containerd[1522]: time="2025-05-16T09:43:31.646447914Z" level=info msg="Container 63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:31.653282 containerd[1522]: time="2025-05-16T09:43:31.653239900Z" level=info msg="CreateContainer within sandbox \"f591e8319e07a85517cde7d5df20c1176d160d547b8720647859c3a9526d201a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701\"" May 16 09:43:31.653939 containerd[1522]: time="2025-05-16T09:43:31.653886095Z" level=info msg="StartContainer for \"63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701\"" May 16 09:43:31.655901 containerd[1522]: time="2025-05-16T09:43:31.655857359Z" level=info msg="connecting to shim 63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701" address="unix:///run/containerd/s/f28ea5f4d7272ca605c65c7c9657410c2ab276b6f5fa91d6ca79a025a895c3b5" protocol=ttrpc version=3 May 16 09:43:31.673908 systemd[1]: Started cri-containerd-63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701.scope - libcontainer container 63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701. 
May 16 09:43:31.707690 containerd[1522]: time="2025-05-16T09:43:31.707653547Z" level=info msg="StartContainer for \"63359584cf88c23ffeec3219643412817ae38ef7c7a81ed66bf3aea058cac701\" returns successfully" May 16 09:43:32.116617 containerd[1522]: time="2025-05-16T09:43:32.116573960Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:32.117733 containerd[1522]: time="2025-05-16T09:43:32.117703832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 16 09:43:32.119274 containerd[1522]: time="2025-05-16T09:43:32.119248140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 486.480318ms" May 16 09:43:32.119455 containerd[1522]: time="2025-05-16T09:43:32.119355699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 16 09:43:32.120607 containerd[1522]: time="2025-05-16T09:43:32.120582889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 16 09:43:32.122684 containerd[1522]: time="2025-05-16T09:43:32.122656313Z" level=info msg="CreateContainer within sandbox \"365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 09:43:32.129714 containerd[1522]: time="2025-05-16T09:43:32.128737466Z" level=info msg="Container faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:32.135582 
containerd[1522]: time="2025-05-16T09:43:32.135484014Z" level=info msg="CreateContainer within sandbox \"365ae6f5d5055060bb170bb0b44c76be1c5917bece7d393d7081a200c280f6d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0\"" May 16 09:43:32.136138 containerd[1522]: time="2025-05-16T09:43:32.136088289Z" level=info msg="StartContainer for \"faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0\"" May 16 09:43:32.142639 containerd[1522]: time="2025-05-16T09:43:32.142596559Z" level=info msg="connecting to shim faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0" address="unix:///run/containerd/s/f90a5f438a458206f58531138b494b126b2cbf21593e40f0cbc572791bef2f0b" protocol=ttrpc version=3 May 16 09:43:32.165901 systemd[1]: Started cri-containerd-faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0.scope - libcontainer container faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0. 
May 16 09:43:32.202098 containerd[1522]: time="2025-05-16T09:43:32.202060978Z" level=info msg="StartContainer for \"faf0563b2697164fa1544795305b3020dd06dfb21df3d63ceda365456d1cafe0\" returns successfully" May 16 09:43:32.482033 kubelet[2640]: I0516 09:43:32.481880 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94d85fb65-qss2s" podStartSLOduration=32.540007917 podStartE2EDuration="42.481785372s" podCreationTimestamp="2025-05-16 09:42:50 +0000 UTC" firstStartedPulling="2025-05-16 09:43:21.690835849 +0000 UTC m=+43.558563560" lastFinishedPulling="2025-05-16 09:43:31.632613304 +0000 UTC m=+53.500341015" observedRunningTime="2025-05-16 09:43:32.467772761 +0000 UTC m=+54.335500512" watchObservedRunningTime="2025-05-16 09:43:32.481785372 +0000 UTC m=+54.349513083" May 16 09:43:32.648134 kubelet[2640]: I0516 09:43:32.648070 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94d85fb65-2hklb" podStartSLOduration=33.060295284 podStartE2EDuration="42.648051885s" podCreationTimestamp="2025-05-16 09:42:50 +0000 UTC" firstStartedPulling="2025-05-16 09:43:22.532399292 +0000 UTC m=+44.400127003" lastFinishedPulling="2025-05-16 09:43:32.120155893 +0000 UTC m=+53.987883604" observedRunningTime="2025-05-16 09:43:32.483173281 +0000 UTC m=+54.350900992" watchObservedRunningTime="2025-05-16 09:43:32.648051885 +0000 UTC m=+54.515779556" May 16 09:43:34.111077 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:33046.service - OpenSSH per-connection server daemon (10.0.0.1:33046). May 16 09:43:34.171305 sshd[4821]: Accepted publickey for core from 10.0.0.1 port 33046 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:34.173866 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:34.182820 systemd-logind[1512]: New session 15 of user core. 
May 16 09:43:34.189128 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 09:43:34.377476 sshd[4827]: Connection closed by 10.0.0.1 port 33046 May 16 09:43:34.378006 sshd-session[4821]: pam_unix(sshd:session): session closed for user core May 16 09:43:34.383086 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:33046.service: Deactivated successfully. May 16 09:43:34.386171 systemd[1]: session-15.scope: Deactivated successfully. May 16 09:43:34.388276 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit. May 16 09:43:34.390812 systemd-logind[1512]: Removed session 15. May 16 09:43:34.612014 containerd[1522]: time="2025-05-16T09:43:34.611967872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:34.612763 containerd[1522]: time="2025-05-16T09:43:34.612452189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 16 09:43:34.613774 containerd[1522]: time="2025-05-16T09:43:34.613718260Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:34.616584 containerd[1522]: time="2025-05-16T09:43:34.616494759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 09:43:34.617025 containerd[1522]: time="2025-05-16T09:43:34.616996915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.496171508s" May 16 09:43:34.617125 containerd[1522]: time="2025-05-16T09:43:34.617109955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 16 09:43:34.618731 containerd[1522]: time="2025-05-16T09:43:34.618711023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 16 09:43:34.625162 containerd[1522]: time="2025-05-16T09:43:34.625141736Z" level=info msg="CreateContainer within sandbox \"57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 09:43:34.634989 containerd[1522]: time="2025-05-16T09:43:34.634895344Z" level=info msg="Container 6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3: CDI devices from CRI Config.CDIDevices: []" May 16 09:43:34.640818 containerd[1522]: time="2025-05-16T09:43:34.640778861Z" level=info msg="CreateContainer within sandbox \"57ba23419277872d32e5928ec418d6b5ce1b4270682f0dad962ad77a07476635\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3\"" May 16 09:43:34.641223 containerd[1522]: time="2025-05-16T09:43:34.641200937Z" level=info msg="StartContainer for \"6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3\"" May 16 09:43:34.642279 containerd[1522]: time="2025-05-16T09:43:34.642246490Z" level=info msg="connecting to shim 6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3" address="unix:///run/containerd/s/bd461cab5bb78e4be170bd44fa3efb396c3f863c9d8ed2bd9f16f6c2a4cd1080" protocol=ttrpc version=3 May 16 09:43:34.667926 systemd[1]: Started 
cri-containerd-6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3.scope - libcontainer container 6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3. May 16 09:43:34.705638 containerd[1522]: time="2025-05-16T09:43:34.705598024Z" level=info msg="StartContainer for \"6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3\" returns successfully" May 16 09:43:35.491603 kubelet[2640]: I0516 09:43:35.491362 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f4c99d75c-2cthn" podStartSLOduration=34.286474632 podStartE2EDuration="44.491343137s" podCreationTimestamp="2025-05-16 09:42:51 +0000 UTC" firstStartedPulling="2025-05-16 09:43:24.41355904 +0000 UTC m=+46.281286711" lastFinishedPulling="2025-05-16 09:43:34.618427545 +0000 UTC m=+56.486155216" observedRunningTime="2025-05-16 09:43:35.49087518 +0000 UTC m=+57.358602931" watchObservedRunningTime="2025-05-16 09:43:35.491343137 +0000 UTC m=+57.359070848" May 16 09:43:36.498596 containerd[1522]: time="2025-05-16T09:43:36.498560127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3\" id:\"5d89e833cd87dfa3652bc75993d8a89ef2294e8bc14a064d38e4d3e9665ef72d\" pid:4890 exited_at:{seconds:1747388616 nanos:498295448}" May 16 09:43:39.389180 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:33056.service - OpenSSH per-connection server daemon (10.0.0.1:33056). May 16 09:43:39.468924 sshd[4903]: Accepted publickey for core from 10.0.0.1 port 33056 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs May 16 09:43:39.470898 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 09:43:39.476215 systemd-logind[1512]: New session 16 of user core. May 16 09:43:39.489616 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 16 09:43:39.537929 containerd[1522]: time="2025-05-16T09:43:39.537882959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:43:39.538805 containerd[1522]: time="2025-05-16T09:43:39.538740593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 16 09:43:39.539917 containerd[1522]: time="2025-05-16T09:43:39.539878266Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:43:39.541941 containerd[1522]: time="2025-05-16T09:43:39.541909373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:43:39.543512 containerd[1522]: time="2025-05-16T09:43:39.543477403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 4.924622981s"
May 16 09:43:39.543512 containerd[1522]: time="2025-05-16T09:43:39.543510882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 16 09:43:39.545643 containerd[1522]: time="2025-05-16T09:43:39.545405070Z" level=info msg="CreateContainer within sandbox \"c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 16 09:43:39.556041 containerd[1522]: time="2025-05-16T09:43:39.556005601Z" level=info msg="Container 1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd: CDI devices from CRI Config.CDIDevices: []"
May 16 09:43:39.566423 containerd[1522]: time="2025-05-16T09:43:39.566372254Z" level=info msg="CreateContainer within sandbox \"c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd\""
May 16 09:43:39.566967 containerd[1522]: time="2025-05-16T09:43:39.566921811Z" level=info msg="StartContainer for \"1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd\""
May 16 09:43:39.568925 containerd[1522]: time="2025-05-16T09:43:39.568897278Z" level=info msg="connecting to shim 1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd" address="unix:///run/containerd/s/ce22c8dd826996b71b83d872e4b7c9f5e9f11c9fb6c36a6a4cf4cc217727da5e" protocol=ttrpc version=3
May 16 09:43:39.597910 systemd[1]: Started cri-containerd-1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd.scope - libcontainer container 1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd.
May 16 09:43:39.634524 containerd[1522]: time="2025-05-16T09:43:39.634447373Z" level=info msg="StartContainer for \"1def86d9cabeabde3031ca339177fc2771ef64342d84b5601d9ee58a8dbc89cd\" returns successfully"
May 16 09:43:39.637709 containerd[1522]: time="2025-05-16T09:43:39.637680552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 16 09:43:39.671855 sshd[4909]: Connection closed by 10.0.0.1 port 33056
May 16 09:43:39.671613 sshd-session[4903]: pam_unix(sshd:session): session closed for user core
May 16 09:43:39.674865 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:33056.service: Deactivated successfully.
May 16 09:43:39.678066 systemd[1]: session-16.scope: Deactivated successfully.
May 16 09:43:39.680586 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit.
May 16 09:43:39.681857 systemd-logind[1512]: Removed session 16.
May 16 09:43:41.369637 containerd[1522]: time="2025-05-16T09:43:41.369491443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:43:41.370430 containerd[1522]: time="2025-05-16T09:43:41.370402517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 16 09:43:41.371399 containerd[1522]: time="2025-05-16T09:43:41.371333472Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:43:41.374050 containerd[1522]: time="2025-05-16T09:43:41.374011455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 09:43:41.376120 containerd[1522]: time="2025-05-16T09:43:41.375934243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.738220531s"
May 16 09:43:41.376120 containerd[1522]: time="2025-05-16T09:43:41.375964723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 16 09:43:41.377874 containerd[1522]: time="2025-05-16T09:43:41.377847831Z" level=info msg="CreateContainer within sandbox \"c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 16 09:43:41.386825 containerd[1522]: time="2025-05-16T09:43:41.385879662Z" level=info msg="Container 97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f: CDI devices from CRI Config.CDIDevices: []"
May 16 09:43:41.394210 containerd[1522]: time="2025-05-16T09:43:41.394177771Z" level=info msg="CreateContainer within sandbox \"c5b939894a1b7822d2de9ff0e3ba42d5c986f7231aa44133f934c8c7edd270b9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f\""
May 16 09:43:41.395011 containerd[1522]: time="2025-05-16T09:43:41.394954846Z" level=info msg="StartContainer for \"97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f\""
May 16 09:43:41.396386 containerd[1522]: time="2025-05-16T09:43:41.396361797Z" level=info msg="connecting to shim 97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f" address="unix:///run/containerd/s/ce22c8dd826996b71b83d872e4b7c9f5e9f11c9fb6c36a6a4cf4cc217727da5e" protocol=ttrpc version=3
May 16 09:43:41.423918 systemd[1]: Started cri-containerd-97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f.scope - libcontainer container 97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f.
May 16 09:43:41.473825 containerd[1522]: time="2025-05-16T09:43:41.471657693Z" level=info msg="StartContainer for \"97e5dbbce8b70e1a9284a8a936a274b6ed40f713c98e812a67c96d85009fb05f\" returns successfully"
May 16 09:43:41.493802 kubelet[2640]: I0516 09:43:41.493721 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tzdcv" podStartSLOduration=35.511246283 podStartE2EDuration="50.493703397s" podCreationTimestamp="2025-05-16 09:42:51 +0000 UTC" firstStartedPulling="2025-05-16 09:43:26.394263004 +0000 UTC m=+48.261990715" lastFinishedPulling="2025-05-16 09:43:41.376720118 +0000 UTC m=+63.244447829" observedRunningTime="2025-05-16 09:43:41.493685717 +0000 UTC m=+63.361413468" watchObservedRunningTime="2025-05-16 09:43:41.493703397 +0000 UTC m=+63.361431108"
May 16 09:43:42.319258 kubelet[2640]: I0516 09:43:42.319204 2640 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 16 09:43:42.319258 kubelet[2640]: I0516 09:43:42.319259 2640 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 16 09:43:44.684127 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:53826.service - OpenSSH per-connection server daemon (10.0.0.1:53826).
May 16 09:43:44.742188 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 53826 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:44.743762 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:44.747831 systemd-logind[1512]: New session 17 of user core.
May 16 09:43:44.760917 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 09:43:44.970780 sshd[4999]: Connection closed by 10.0.0.1 port 53826
May 16 09:43:44.971040 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
May 16 09:43:44.974974 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:53826.service: Deactivated successfully.
May 16 09:43:44.977193 systemd[1]: session-17.scope: Deactivated successfully.
May 16 09:43:44.979338 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit.
May 16 09:43:44.980635 systemd-logind[1512]: Removed session 17.
May 16 09:43:49.204401 kubelet[2640]: E0516 09:43:49.204328 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 09:43:49.982262 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:53832.service - OpenSSH per-connection server daemon (10.0.0.1:53832).
May 16 09:43:50.042103 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 53832 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:50.043399 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:50.047063 systemd-logind[1512]: New session 18 of user core.
May 16 09:43:50.056901 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 09:43:50.189106 sshd[5021]: Connection closed by 10.0.0.1 port 53832
May 16 09:43:50.189787 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
May 16 09:43:50.195695 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:53832.service: Deactivated successfully.
May 16 09:43:50.198466 systemd[1]: session-18.scope: Deactivated successfully.
May 16 09:43:50.199565 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit.
May 16 09:43:50.201015 systemd-logind[1512]: Removed session 18.
May 16 09:43:51.461835 containerd[1522]: time="2025-05-16T09:43:51.461796801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50c3c3d14a47c7dc3409b9e79ad99c7196aecd6d3cc7b9f67cf17cab81728aec\" id:\"62765242c86fa34cd02037f3ab58540dea11a896ddb7084d046b058756993d76\" pid:5047 exited_at:{seconds:1747388631 nanos:460984165}"
May 16 09:43:51.466595 kubelet[2640]: E0516 09:43:51.466420 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 09:43:55.201361 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:55996.service - OpenSSH per-connection server daemon (10.0.0.1:55996).
May 16 09:43:55.247794 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 55996 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:55.249190 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:55.252905 systemd-logind[1512]: New session 19 of user core.
May 16 09:43:55.264971 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 09:43:55.416484 sshd[5066]: Connection closed by 10.0.0.1 port 55996
May 16 09:43:55.417878 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
May 16 09:43:55.425917 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:55996.service: Deactivated successfully.
May 16 09:43:55.428231 systemd[1]: session-19.scope: Deactivated successfully.
May 16 09:43:55.428941 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit.
May 16 09:43:55.432989 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:56004.service - OpenSSH per-connection server daemon (10.0.0.1:56004).
May 16 09:43:55.434105 systemd-logind[1512]: Removed session 19.
May 16 09:43:55.487198 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 56004 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:55.488378 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:55.492197 systemd-logind[1512]: New session 20 of user core.
May 16 09:43:55.501889 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 09:43:55.710591 sshd[5081]: Connection closed by 10.0.0.1 port 56004
May 16 09:43:55.711899 sshd-session[5079]: pam_unix(sshd:session): session closed for user core
May 16 09:43:55.725168 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:56004.service: Deactivated successfully.
May 16 09:43:55.727093 systemd[1]: session-20.scope: Deactivated successfully.
May 16 09:43:55.727911 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit.
May 16 09:43:55.730562 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:56018.service - OpenSSH per-connection server daemon (10.0.0.1:56018).
May 16 09:43:55.731434 systemd-logind[1512]: Removed session 20.
May 16 09:43:55.791666 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 56018 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:55.793540 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:55.797284 systemd-logind[1512]: New session 21 of user core.
May 16 09:43:55.808896 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 09:43:56.542712 sshd[5095]: Connection closed by 10.0.0.1 port 56018
May 16 09:43:56.543078 sshd-session[5093]: pam_unix(sshd:session): session closed for user core
May 16 09:43:56.555784 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:56018.service: Deactivated successfully.
May 16 09:43:56.558952 systemd[1]: session-21.scope: Deactivated successfully.
May 16 09:43:56.561004 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit.
May 16 09:43:56.564865 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:56022.service - OpenSSH per-connection server daemon (10.0.0.1:56022).
May 16 09:43:56.567925 systemd-logind[1512]: Removed session 21.
May 16 09:43:56.620713 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 56022 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:56.622120 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:56.626225 systemd-logind[1512]: New session 22 of user core.
May 16 09:43:56.631930 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 09:43:56.875678 sshd[5117]: Connection closed by 10.0.0.1 port 56022
May 16 09:43:56.876249 sshd-session[5114]: pam_unix(sshd:session): session closed for user core
May 16 09:43:56.889118 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:56022.service: Deactivated successfully.
May 16 09:43:56.890678 systemd[1]: session-22.scope: Deactivated successfully.
May 16 09:43:56.891652 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit.
May 16 09:43:56.894167 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:56024.service - OpenSSH per-connection server daemon (10.0.0.1:56024).
May 16 09:43:56.895594 systemd-logind[1512]: Removed session 22.
May 16 09:43:56.944867 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 56024 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:43:56.946279 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:43:56.951903 systemd-logind[1512]: New session 23 of user core.
May 16 09:43:56.965946 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 09:43:57.089078 sshd[5131]: Connection closed by 10.0.0.1 port 56024
May 16 09:43:57.089409 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
May 16 09:43:57.093425 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:56024.service: Deactivated successfully.
May 16 09:43:57.096355 systemd[1]: session-23.scope: Deactivated successfully.
May 16 09:43:57.097246 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit.
May 16 09:43:57.098766 systemd-logind[1512]: Removed session 23.
May 16 09:44:02.104186 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:56032.service - OpenSSH per-connection server daemon (10.0.0.1:56032).
May 16 09:44:02.150432 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 56032 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:44:02.151699 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:44:02.155874 systemd-logind[1512]: New session 24 of user core.
May 16 09:44:02.162886 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 09:44:02.287018 sshd[5153]: Connection closed by 10.0.0.1 port 56032
May 16 09:44:02.287362 sshd-session[5151]: pam_unix(sshd:session): session closed for user core
May 16 09:44:02.290146 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:56032.service: Deactivated successfully.
May 16 09:44:02.291823 systemd[1]: session-24.scope: Deactivated successfully.
May 16 09:44:02.294784 systemd-logind[1512]: Session 24 logged out. Waiting for processes to exit.
May 16 09:44:02.296598 systemd-logind[1512]: Removed session 24.
May 16 09:44:06.500810 containerd[1522]: time="2025-05-16T09:44:06.500773162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6341e7f0b5fd1a997efaa8cdeb0b41f182e7eaa574a48be4767c5d354d23d0e3\" id:\"fa743a3b651ab387ea48038f9137f2bcc1d6a8d505676aed356f6d582989dd5f\" pid:5184 exited_at:{seconds:1747388646 nanos:500560563}"
May 16 09:44:07.301555 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:42472.service - OpenSSH per-connection server daemon (10.0.0.1:42472).
May 16 09:44:07.368019 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 42472 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:44:07.369516 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:44:07.375406 systemd-logind[1512]: New session 25 of user core.
May 16 09:44:07.383101 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 09:44:07.533018 sshd[5197]: Connection closed by 10.0.0.1 port 42472
May 16 09:44:07.533935 sshd-session[5195]: pam_unix(sshd:session): session closed for user core
May 16 09:44:07.537785 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:42472.service: Deactivated successfully.
May 16 09:44:07.539944 systemd[1]: session-25.scope: Deactivated successfully.
May 16 09:44:07.543832 systemd-logind[1512]: Session 25 logged out. Waiting for processes to exit.
May 16 09:44:07.545718 systemd-logind[1512]: Removed session 25.
May 16 09:44:08.204830 kubelet[2640]: E0516 09:44:08.204795 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 09:44:10.206701 kubelet[2640]: E0516 09:44:10.206654 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 09:44:12.557131 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:48888.service - OpenSSH per-connection server daemon (10.0.0.1:48888).
May 16 09:44:12.627645 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 48888 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:44:12.629140 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:44:12.632957 systemd-logind[1512]: New session 26 of user core.
May 16 09:44:12.640902 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 09:44:12.811569 sshd[5212]: Connection closed by 10.0.0.1 port 48888
May 16 09:44:12.811994 sshd-session[5210]: pam_unix(sshd:session): session closed for user core
May 16 09:44:12.819530 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:48888.service: Deactivated successfully.
May 16 09:44:12.821582 systemd[1]: session-26.scope: Deactivated successfully.
May 16 09:44:12.823505 systemd-logind[1512]: Session 26 logged out. Waiting for processes to exit.
May 16 09:44:12.825048 systemd-logind[1512]: Removed session 26.
May 16 09:44:17.827128 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:48894.service - OpenSSH per-connection server daemon (10.0.0.1:48894).
May 16 09:44:17.898131 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 48894 ssh2: RSA SHA256:b3FRdJnMtZ1pZz78i8Z6z+eC4CmRz2zm08X1BjD77Xs
May 16 09:44:17.899362 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 09:44:17.903433 systemd-logind[1512]: New session 27 of user core.
May 16 09:44:17.917949 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 09:44:18.049187 sshd[5229]: Connection closed by 10.0.0.1 port 48894
May 16 09:44:18.049522 sshd-session[5227]: pam_unix(sshd:session): session closed for user core
May 16 09:44:18.052956 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:48894.service: Deactivated successfully.
May 16 09:44:18.054888 systemd[1]: session-27.scope: Deactivated successfully.
May 16 09:44:18.055735 systemd-logind[1512]: Session 27 logged out. Waiting for processes to exit.
May 16 09:44:18.057131 systemd-logind[1512]: Removed session 27.
May 16 09:44:19.203998 kubelet[2640]: E0516 09:44:19.203951 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"