May 13 10:02:36.824855 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 10:02:36.824875 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 08:41:27 -00 2025
May 13 10:02:36.824885 kernel: KASLR enabled
May 13 10:02:36.824890 kernel: efi: EFI v2.7 by EDK II
May 13 10:02:36.824896 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 13 10:02:36.824901 kernel: random: crng init done
May 13 10:02:36.824908 kernel: secureboot: Secure boot disabled
May 13 10:02:36.824914 kernel: ACPI: Early table checksum verification disabled
May 13 10:02:36.824920 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 13 10:02:36.824927 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 10:02:36.824933 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824939 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824945 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824951 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824958 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824966 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824972 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824978 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824984 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 10:02:36.824990 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 10:02:36.824997 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 13 10:02:36.825003 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 10:02:36.825009 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 13 10:02:36.825015 kernel: Zone ranges:
May 13 10:02:36.825021 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 10:02:36.825028 kernel: DMA32 empty
May 13 10:02:36.825034 kernel: Normal empty
May 13 10:02:36.825040 kernel: Device empty
May 13 10:02:36.825046 kernel: Movable zone start for each node
May 13 10:02:36.825052 kernel: Early memory node ranges
May 13 10:02:36.825058 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 13 10:02:36.825065 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 13 10:02:36.825071 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 13 10:02:36.825077 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 13 10:02:36.825083 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 13 10:02:36.825089 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 13 10:02:36.825095 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 13 10:02:36.825102 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 13 10:02:36.825109 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 13 10:02:36.825115 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 10:02:36.825124 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 10:02:36.825130 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 10:02:36.825137 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 10:02:36.825145 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 10:02:36.825152 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 10:02:36.825158 kernel: psci: probing for conduit method from ACPI.
May 13 10:02:36.825165 kernel: psci: PSCIv1.1 detected in firmware.
May 13 10:02:36.825171 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 10:02:36.825177 kernel: psci: Trusted OS migration not required
May 13 10:02:36.825184 kernel: psci: SMC Calling Convention v1.1
May 13 10:02:36.825190 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 10:02:36.825197 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 13 10:02:36.825204 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 13 10:02:36.825212 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 10:02:36.825218 kernel: Detected PIPT I-cache on CPU0
May 13 10:02:36.825225 kernel: CPU features: detected: GIC system register CPU interface
May 13 10:02:36.825231 kernel: CPU features: detected: Spectre-v4
May 13 10:02:36.825238 kernel: CPU features: detected: Spectre-BHB
May 13 10:02:36.825244 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 10:02:36.825251 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 10:02:36.825257 kernel: CPU features: detected: ARM erratum 1418040
May 13 10:02:36.825264 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 10:02:36.825270 kernel: alternatives: applying boot alternatives
May 13 10:02:36.825278 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3651514edeb4393ddaa415275e0af422804924552258e142c279f217f1c9042
May 13 10:02:36.825286 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 10:02:36.825300 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 10:02:36.825307 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 10:02:36.825314 kernel: Fallback order for Node 0: 0
May 13 10:02:36.825320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 13 10:02:36.825327 kernel: Policy zone: DMA
May 13 10:02:36.825333 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 10:02:36.825340 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 13 10:02:36.825348 kernel: software IO TLB: area num 4.
May 13 10:02:36.825355 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 13 10:02:36.825362 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 13 10:02:36.825368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 10:02:36.825377 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 10:02:36.825384 kernel: rcu: RCU event tracing is enabled.
May 13 10:02:36.825391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 10:02:36.825397 kernel: Trampoline variant of Tasks RCU enabled.
May 13 10:02:36.825404 kernel: Tracing variant of Tasks RCU enabled.
May 13 10:02:36.825410 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 10:02:36.825417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 10:02:36.825423 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:02:36.825430 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 10:02:36.825436 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 10:02:36.825442 kernel: GICv3: 256 SPIs implemented
May 13 10:02:36.825450 kernel: GICv3: 0 Extended SPIs implemented
May 13 10:02:36.825456 kernel: Root IRQ handler: gic_handle_irq
May 13 10:02:36.825463 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 10:02:36.825469 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 13 10:02:36.825475 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 10:02:36.825481 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 10:02:36.825488 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 13 10:02:36.825495 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 13 10:02:36.825501 kernel: GICv3: using LPI property table @0x0000000040100000
May 13 10:02:36.825507 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 13 10:02:36.825514 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 10:02:36.825520 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:02:36.825528 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 10:02:36.825535 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 10:02:36.825541 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 10:02:36.825548 kernel: arm-pv: using stolen time PV
May 13 10:02:36.825554 kernel: Console: colour dummy device 80x25
May 13 10:02:36.825561 kernel: ACPI: Core revision 20240827
May 13 10:02:36.825567 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 10:02:36.825574 kernel: pid_max: default: 32768 minimum: 301
May 13 10:02:36.825581 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 10:02:36.825588 kernel: landlock: Up and running.
May 13 10:02:36.825595 kernel: SELinux: Initializing.
May 13 10:02:36.825602 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:02:36.825608 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 10:02:36.825615 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 10:02:36.825622 kernel: rcu: Hierarchical SRCU implementation.
May 13 10:02:36.825629 kernel: rcu: Max phase no-delay instances is 400.
May 13 10:02:36.825635 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 10:02:36.825642 kernel: Remapping and enabling EFI services.
May 13 10:02:36.825650 kernel: smp: Bringing up secondary CPUs ...
May 13 10:02:36.825661 kernel: Detected PIPT I-cache on CPU1
May 13 10:02:36.825668 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 10:02:36.825677 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 13 10:02:36.825684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:02:36.825691 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 10:02:36.825698 kernel: Detected PIPT I-cache on CPU2
May 13 10:02:36.825704 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 10:02:36.825711 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 13 10:02:36.825720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:02:36.825726 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 10:02:36.825733 kernel: Detected PIPT I-cache on CPU3
May 13 10:02:36.825740 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 10:02:36.825747 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 13 10:02:36.825754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 10:02:36.825761 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 10:02:36.825767 kernel: smp: Brought up 1 node, 4 CPUs
May 13 10:02:36.825774 kernel: SMP: Total of 4 processors activated.
May 13 10:02:36.825792 kernel: CPU: All CPU(s) started at EL1
May 13 10:02:36.825800 kernel: CPU features: detected: 32-bit EL0 Support
May 13 10:02:36.825807 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 10:02:36.825814 kernel: CPU features: detected: Common not Private translations
May 13 10:02:36.825821 kernel: CPU features: detected: CRC32 instructions
May 13 10:02:36.825828 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 10:02:36.825835 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 10:02:36.825842 kernel: CPU features: detected: LSE atomic instructions
May 13 10:02:36.825849 kernel: CPU features: detected: Privileged Access Never
May 13 10:02:36.825858 kernel: CPU features: detected: RAS Extension Support
May 13 10:02:36.825865 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 10:02:36.825872 kernel: alternatives: applying system-wide alternatives
May 13 10:02:36.825879 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 13 10:02:36.825886 kernel: Memory: 2440920K/2572288K available (11072K kernel code, 2276K rwdata, 8932K rodata, 39488K init, 1034K bss, 125600K reserved, 0K cma-reserved)
May 13 10:02:36.825893 kernel: devtmpfs: initialized
May 13 10:02:36.825900 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 10:02:36.825907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 10:02:36.825914 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 10:02:36.825922 kernel: 0 pages in range for non-PLT usage
May 13 10:02:36.825929 kernel: 508528 pages in range for PLT usage
May 13 10:02:36.825936 kernel: pinctrl core: initialized pinctrl subsystem
May 13 10:02:36.825943 kernel: SMBIOS 3.0.0 present.
May 13 10:02:36.825950 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 10:02:36.825956 kernel: DMI: Memory slots populated: 1/1
May 13 10:02:36.825963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 10:02:36.825970 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 10:02:36.825977 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 10:02:36.825986 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 10:02:36.825992 kernel: audit: initializing netlink subsys (disabled)
May 13 10:02:36.825999 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
May 13 10:02:36.826006 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 10:02:36.826013 kernel: cpuidle: using governor menu
May 13 10:02:36.826020 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 10:02:36.826027 kernel: ASID allocator initialised with 32768 entries
May 13 10:02:36.826034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 10:02:36.826041 kernel: Serial: AMBA PL011 UART driver
May 13 10:02:36.826049 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 10:02:36.826056 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 10:02:36.826062 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 10:02:36.826069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 10:02:36.826076 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 10:02:36.826083 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 10:02:36.826090 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 10:02:36.826101 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 10:02:36.826108 kernel: ACPI: Added _OSI(Module Device)
May 13 10:02:36.826133 kernel: ACPI: Added _OSI(Processor Device)
May 13 10:02:36.826142 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 10:02:36.826152 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 10:02:36.826159 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 10:02:36.826167 kernel: ACPI: Interpreter enabled
May 13 10:02:36.826174 kernel: ACPI: Using GIC for interrupt routing
May 13 10:02:36.826181 kernel: ACPI: MCFG table detected, 1 entries
May 13 10:02:36.826187 kernel: ACPI: CPU0 has been hot-added
May 13 10:02:36.826194 kernel: ACPI: CPU1 has been hot-added
May 13 10:02:36.826202 kernel: ACPI: CPU2 has been hot-added
May 13 10:02:36.826209 kernel: ACPI: CPU3 has been hot-added
May 13 10:02:36.826216 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 10:02:36.826223 kernel: printk: legacy console [ttyAMA0] enabled
May 13 10:02:36.826230 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 10:02:36.826361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 10:02:36.826427 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 10:02:36.826487 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 10:02:36.826550 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 10:02:36.826609 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 10:02:36.826638 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 10:02:36.826647 kernel: PCI host bridge to bus 0000:00
May 13 10:02:36.826714 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 10:02:36.826772 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 10:02:36.826849 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 10:02:36.826906 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 10:02:36.826980 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 13 10:02:36.827048 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 10:02:36.827109 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 13 10:02:36.827169 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 13 10:02:36.827228 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 10:02:36.827287 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 13 10:02:36.827360 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 13 10:02:36.827420 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 13 10:02:36.827474 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 10:02:36.827526 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 10:02:36.827580 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 10:02:36.827589 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 10:02:36.827596 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 10:02:36.827605 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 10:02:36.827612 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 10:02:36.827619 kernel: iommu: Default domain type: Translated
May 13 10:02:36.827626 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 10:02:36.827633 kernel: efivars: Registered efivars operations
May 13 10:02:36.827639 kernel: vgaarb: loaded
May 13 10:02:36.827646 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 10:02:36.827653 kernel: VFS: Disk quotas dquot_6.6.0
May 13 10:02:36.827660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 10:02:36.827669 kernel: pnp: PnP ACPI init
May 13 10:02:36.827741 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 10:02:36.827751 kernel: pnp: PnP ACPI: found 1 devices
May 13 10:02:36.827758 kernel: NET: Registered PF_INET protocol family
May 13 10:02:36.827764 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 10:02:36.827772 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 10:02:36.827798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 10:02:36.827818 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 10:02:36.827828 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 10:02:36.827835 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 10:02:36.827842 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:02:36.827849 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 10:02:36.827856 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 10:02:36.827863 kernel: PCI: CLS 0 bytes, default 64
May 13 10:02:36.827870 kernel: kvm [1]: HYP mode not available
May 13 10:02:36.827877 kernel: Initialise system trusted keyrings
May 13 10:02:36.827884 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 10:02:36.827893 kernel: Key type asymmetric registered
May 13 10:02:36.827900 kernel: Asymmetric key parser 'x509' registered
May 13 10:02:36.827907 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 10:02:36.827914 kernel: io scheduler mq-deadline registered
May 13 10:02:36.827921 kernel: io scheduler kyber registered
May 13 10:02:36.827927 kernel: io scheduler bfq registered
May 13 10:02:36.827934 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 10:02:36.827941 kernel: ACPI: button: Power Button [PWRB]
May 13 10:02:36.827948 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 10:02:36.828021 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 10:02:36.828031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 10:02:36.828037 kernel: thunder_xcv, ver 1.0
May 13 10:02:36.828044 kernel: thunder_bgx, ver 1.0
May 13 10:02:36.828051 kernel: nicpf, ver 1.0
May 13 10:02:36.828058 kernel: nicvf, ver 1.0
May 13 10:02:36.828125 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 10:02:36.828182 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T10:02:36 UTC (1747130556)
May 13 10:02:36.828193 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 10:02:36.828200 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 13 10:02:36.828207 kernel: watchdog: NMI not fully supported
May 13 10:02:36.828214 kernel: watchdog: Hard watchdog permanently disabled
May 13 10:02:36.828221 kernel: NET: Registered PF_INET6 protocol family
May 13 10:02:36.828228 kernel: Segment Routing with IPv6
May 13 10:02:36.828235 kernel: In-situ OAM (IOAM) with IPv6
May 13 10:02:36.828241 kernel: NET: Registered PF_PACKET protocol family
May 13 10:02:36.828248 kernel: Key type dns_resolver registered
May 13 10:02:36.828256 kernel: registered taskstats version 1
May 13 10:02:36.828263 kernel: Loading compiled-in X.509 certificates
May 13 10:02:36.828270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d18e2d911aaed50d8aae6c7998623d31780af195'
May 13 10:02:36.828277 kernel: Demotion targets for Node 0: null
May 13 10:02:36.828284 kernel: Key type .fscrypt registered
May 13 10:02:36.828297 kernel: Key type fscrypt-provisioning registered
May 13 10:02:36.828304 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 10:02:36.828311 kernel: ima: Allocated hash algorithm: sha1
May 13 10:02:36.828318 kernel: ima: No architecture policies found
May 13 10:02:36.828327 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 10:02:36.828334 kernel: clk: Disabling unused clocks
May 13 10:02:36.828341 kernel: PM: genpd: Disabling unused power domains
May 13 10:02:36.828347 kernel: Warning: unable to open an initial console.
May 13 10:02:36.828354 kernel: Freeing unused kernel memory: 39488K
May 13 10:02:36.828361 kernel: Run /init as init process
May 13 10:02:36.828368 kernel: with arguments:
May 13 10:02:36.828375 kernel: /init
May 13 10:02:36.828381 kernel: with environment:
May 13 10:02:36.828389 kernel: HOME=/
May 13 10:02:36.828396 kernel: TERM=linux
May 13 10:02:36.828403 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 10:02:36.828410 systemd[1]: Successfully made /usr/ read-only.
May 13 10:02:36.828420 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 10:02:36.828428 systemd[1]: Detected virtualization kvm.
May 13 10:02:36.828435 systemd[1]: Detected architecture arm64.
May 13 10:02:36.828442 systemd[1]: Running in initrd.
May 13 10:02:36.828450 systemd[1]: No hostname configured, using default hostname.
May 13 10:02:36.828458 systemd[1]: Hostname set to .
May 13 10:02:36.828465 systemd[1]: Initializing machine ID from VM UUID.
May 13 10:02:36.828472 systemd[1]: Queued start job for default target initrd.target.
May 13 10:02:36.828480 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:02:36.828487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:02:36.828495 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 10:02:36.828502 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 10:02:36.828511 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 10:02:36.828519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 10:02:36.828528 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 10:02:36.828535 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 10:02:36.828543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:02:36.828550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 10:02:36.828559 systemd[1]: Reached target paths.target - Path Units.
May 13 10:02:36.828566 systemd[1]: Reached target slices.target - Slice Units.
May 13 10:02:36.828574 systemd[1]: Reached target swap.target - Swaps.
May 13 10:02:36.828581 systemd[1]: Reached target timers.target - Timer Units.
May 13 10:02:36.828588 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 10:02:36.828595 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 10:02:36.828603 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 10:02:36.828610 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 10:02:36.828618 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:02:36.828626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 10:02:36.828634 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:02:36.828641 systemd[1]: Reached target sockets.target - Socket Units.
May 13 10:02:36.828649 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 10:02:36.828656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 10:02:36.828663 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 10:02:36.828671 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 10:02:36.828679 systemd[1]: Starting systemd-fsck-usr.service...
May 13 10:02:36.828687 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 10:02:36.828695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 10:02:36.828702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:02:36.828709 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 10:02:36.828717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:02:36.828726 systemd[1]: Finished systemd-fsck-usr.service.
May 13 10:02:36.828733 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 10:02:36.828757 systemd-journald[245]: Collecting audit messages is disabled.
May 13 10:02:36.828775 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:02:36.828797 systemd-journald[245]: Journal started
May 13 10:02:36.828816 systemd-journald[245]: Runtime Journal (/run/log/journal/207a4f34bb2e4e14a530100cb6bcec5c) is 6M, max 48.5M, 42.4M free.
May 13 10:02:36.821246 systemd-modules-load[246]: Inserted module 'overlay'
May 13 10:02:36.830642 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 10:02:36.835238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 10:02:36.839411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 10:02:36.838914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 10:02:36.843924 kernel: Bridge firewalling registered
May 13 10:02:36.841874 systemd-modules-load[246]: Inserted module 'br_netfilter'
May 13 10:02:36.842730 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 10:02:36.846170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:02:36.848671 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 10:02:36.848814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:02:36.858118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 10:02:36.860399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:02:36.864181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 10:02:36.865522 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:02:36.868476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:02:36.871353 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 10:02:36.873608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 10:02:36.894499 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3651514edeb4393ddaa415275e0af422804924552258e142c279f217f1c9042
May 13 10:02:36.909519 systemd-resolved[290]: Positive Trust Anchors:
May 13 10:02:36.909532 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 10:02:36.909565 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 10:02:36.914237 systemd-resolved[290]: Defaulting to hostname 'linux'.
May 13 10:02:36.915110 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 10:02:36.918388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 10:02:36.964806 kernel: SCSI subsystem initialized
May 13 10:02:36.968799 kernel: Loading iSCSI transport class v2.0-870.
May 13 10:02:36.978808 kernel: iscsi: registered transport (tcp)
May 13 10:02:36.990808 kernel: iscsi: registered transport (qla4xxx)
May 13 10:02:36.990823 kernel: QLogic iSCSI HBA Driver
May 13 10:02:37.006605 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 10:02:37.023713 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:02:37.026240 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 10:02:37.067689 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 10:02:37.069849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 10:02:37.130824 kernel: raid6: neonx8 gen() 15769 MB/s
May 13 10:02:37.147815 kernel: raid6: neonx4 gen() 15811 MB/s
May 13 10:02:37.164817 kernel: raid6: neonx2 gen() 13157 MB/s
May 13 10:02:37.181814 kernel: raid6: neonx1 gen() 10419 MB/s
May 13 10:02:37.198811 kernel: raid6: int64x8 gen() 6893 MB/s
May 13 10:02:37.215813 kernel: raid6: int64x4 gen() 7347 MB/s
May 13 10:02:37.232813 kernel: raid6: int64x2 gen() 6093 MB/s
May 13 10:02:37.249950 kernel: raid6: int64x1 gen() 5049 MB/s
May 13 10:02:37.249975 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
May 13 10:02:37.267925 kernel: raid6: .... xor() 12374 MB/s, rmw enabled
May 13 10:02:37.267947 kernel: raid6: using neon recovery algorithm
May 13 10:02:37.277169 kernel: xor: measuring software checksum speed
May 13 10:02:37.277187 kernel: 8regs : 21556 MB/sec
May 13 10:02:37.277846 kernel: 32regs : 20924 MB/sec
May 13 10:02:37.279122 kernel: arm64_neon : 27984 MB/sec
May 13 10:02:37.279134 kernel: xor: using function: arm64_neon (27984 MB/sec)
May 13 10:02:37.344818 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 10:02:37.353709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 10:02:37.356320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:02:37.388725 systemd-udevd[499]: Using default interface naming scheme 'v255'.
May 13 10:02:37.392723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:02:37.394987 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 10:02:37.422773 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
May 13 10:02:37.444851 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 10:02:37.447014 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 10:02:37.506693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:02:37.510139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 10:02:37.559803 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 10:02:37.561795 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 10:02:37.561810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 10:02:37.561997 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:02:37.568651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 10:02:37.568676 kernel: GPT:9289727 != 19775487
May 13 10:02:37.568685 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 10:02:37.568695 kernel: GPT:9289727 != 19775487
May 13 10:02:37.568704 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 10:02:37.568712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:02:37.567566 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:02:37.570959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:02:37.600842 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 10:02:37.603244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:02:37.609505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 10:02:37.617556 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 10:02:37.628204 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 10:02:37.629421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 10:02:37.638682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 10:02:37.639920 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 10:02:37.641952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:02:37.644013 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 10:02:37.646583 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 10:02:37.648369 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 10:02:37.668570 disk-uuid[593]: Primary Header is updated.
May 13 10:02:37.668570 disk-uuid[593]: Secondary Entries is updated.
May 13 10:02:37.668570 disk-uuid[593]: Secondary Header is updated.
May 13 10:02:37.672146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 10:02:37.675810 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:02:38.682814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 10:02:38.683586 disk-uuid[598]: The operation has completed successfully.
May 13 10:02:38.707176 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 10:02:38.707277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 10:02:38.733720 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 10:02:38.762574 sh[613]: Success
May 13 10:02:38.775847 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 10:02:38.775878 kernel: device-mapper: uevent: version 1.0.3
May 13 10:02:38.779808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 10:02:38.788610 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 13 10:02:38.812127 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 10:02:38.814766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 10:02:38.830632 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 10:02:38.838002 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 10:02:38.838028 kernel: BTRFS: device fsid a7f3e58b-f7f0-457e-beaa-7636cc7d4568 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (625)
May 13 10:02:38.839410 kernel: BTRFS info (device dm-0): first mount of filesystem a7f3e58b-f7f0-457e-beaa-7636cc7d4568
May 13 10:02:38.840498 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 10:02:38.841180 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 10:02:38.844572 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 10:02:38.845794 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 10:02:38.847207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 10:02:38.848014 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 10:02:38.849456 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 10:02:38.875937 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (656)
May 13 10:02:38.875974 kernel: BTRFS info (device vda6): first mount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:02:38.876934 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 10:02:38.877794 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:02:38.884822 kernel: BTRFS info (device vda6): last unmount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:02:38.885870 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 10:02:38.888712 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 10:02:38.953843 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 10:02:38.956808 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 10:02:38.997160 systemd-networkd[800]: lo: Link UP
May 13 10:02:38.997171 systemd-networkd[800]: lo: Gained carrier
May 13 10:02:38.997917 systemd-networkd[800]: Enumeration completed
May 13 10:02:38.998377 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:02:38.998380 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 10:02:38.998985 systemd-networkd[800]: eth0: Link UP
May 13 10:02:38.998988 systemd-networkd[800]: eth0: Gained carrier
May 13 10:02:38.998996 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:02:38.999897 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 10:02:39.001030 systemd[1]: Reached target network.target - Network.
May 13 10:02:39.020846 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 10:02:39.024398 ignition[699]: Ignition 2.21.0
May 13 10:02:39.024410 ignition[699]: Stage: fetch-offline
May 13 10:02:39.024435 ignition[699]: no configs at "/usr/lib/ignition/base.d"
May 13 10:02:39.024443 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:02:39.024616 ignition[699]: parsed url from cmdline: ""
May 13 10:02:39.024618 ignition[699]: no config URL provided
May 13 10:02:39.024623 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
May 13 10:02:39.024629 ignition[699]: no config at "/usr/lib/ignition/user.ign"
May 13 10:02:39.024645 ignition[699]: op(1): [started] loading QEMU firmware config module
May 13 10:02:39.024649 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 10:02:39.035698 ignition[699]: op(1): [finished] loading QEMU firmware config module
May 13 10:02:39.035723 ignition[699]: QEMU firmware config was not found. Ignoring...
May 13 10:02:39.073538 ignition[699]: parsing config with SHA512: cfcb9bb2f9e73c1a96c95e6180882b44b84cd59359f031f2903b38caf93fa89c5276e9054c1143ff4c927cf47eb860b09f9d5bf2eebff0254602a7bf4ba28103
May 13 10:02:39.079605 unknown[699]: fetched base config from "system"
May 13 10:02:39.079617 unknown[699]: fetched user config from "qemu"
May 13 10:02:39.080071 ignition[699]: fetch-offline: fetch-offline passed
May 13 10:02:39.080131 ignition[699]: Ignition finished successfully
May 13 10:02:39.081363 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 10:02:39.083385 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 10:02:39.084228 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 10:02:39.118495 ignition[814]: Ignition 2.21.0
May 13 10:02:39.118510 ignition[814]: Stage: kargs
May 13 10:02:39.118630 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 13 10:02:39.118638 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:02:39.119906 ignition[814]: kargs: kargs passed
May 13 10:02:39.119961 ignition[814]: Ignition finished successfully
May 13 10:02:39.123251 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 10:02:39.125324 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 10:02:39.145949 ignition[822]: Ignition 2.21.0
May 13 10:02:39.145961 ignition[822]: Stage: disks
May 13 10:02:39.147163 ignition[822]: no configs at "/usr/lib/ignition/base.d"
May 13 10:02:39.147174 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:02:39.148332 ignition[822]: disks: disks passed
May 13 10:02:39.148377 ignition[822]: Ignition finished successfully
May 13 10:02:39.150776 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 10:02:39.151944 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 10:02:39.153702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 10:02:39.155757 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 10:02:39.157726 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 10:02:39.159457 systemd[1]: Reached target basic.target - Basic System.
May 13 10:02:39.161968 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 10:02:39.183605 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 10:02:39.188411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 10:02:39.190456 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 10:02:39.250811 kernel: EXT4-fs (vda9): mounted filesystem 70c9b161-a0a5-4b0a-87a4-ca4044b4e9ba r/w with ordered data mode. Quota mode: none.
May 13 10:02:39.251604 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 10:02:39.252755 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 10:02:39.255770 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:02:39.257936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 10:02:39.258899 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 10:02:39.258939 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 10:02:39.258960 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 10:02:39.271163 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 10:02:39.273409 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 10:02:39.278940 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (840)
May 13 10:02:39.278960 kernel: BTRFS info (device vda6): first mount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:02:39.278971 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 10:02:39.278981 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:02:39.282071 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:02:39.314142 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
May 13 10:02:39.317821 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
May 13 10:02:39.321491 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
May 13 10:02:39.325436 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 10:02:39.395306 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 10:02:39.398245 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 10:02:39.399754 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 10:02:39.413818 kernel: BTRFS info (device vda6): last unmount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:02:39.428886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 10:02:39.441101 ignition[954]: INFO : Ignition 2.21.0
May 13 10:02:39.441101 ignition[954]: INFO : Stage: mount
May 13 10:02:39.442687 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:02:39.442687 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:02:39.442687 ignition[954]: INFO : mount: mount passed
May 13 10:02:39.442687 ignition[954]: INFO : Ignition finished successfully
May 13 10:02:39.444097 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 10:02:39.446604 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 10:02:39.836713 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 10:02:39.838201 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 10:02:39.867791 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (967)
May 13 10:02:39.869925 kernel: BTRFS info (device vda6): first mount of filesystem 8aae84f1-2e43-4be0-9e92-8827170a573f
May 13 10:02:39.869963 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 10:02:39.869984 kernel: BTRFS info (device vda6): using free-space-tree
May 13 10:02:39.873309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 10:02:39.914457 ignition[984]: INFO : Ignition 2.21.0
May 13 10:02:39.914457 ignition[984]: INFO : Stage: files
May 13 10:02:39.916112 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:02:39.916112 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:02:39.918403 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
May 13 10:02:39.919558 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 10:02:39.919558 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 10:02:39.922201 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 10:02:39.922201 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 10:02:39.922201 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 10:02:39.921656 unknown[984]: wrote ssh authorized keys file for user: core
May 13 10:02:39.927591 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 10:02:39.927591 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 10:02:40.062518 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 10:02:40.346823 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 10:02:40.348881 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 10:02:40.362405 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 10:02:40.362405 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 10:02:40.362405 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 10:02:40.362405 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 10:02:40.362405 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 10:02:40.362405 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 10:02:40.504917 systemd-networkd[800]: eth0: Gained IPv6LL
May 13 10:02:40.681936 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 10:02:40.851577 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 10:02:40.851577 ignition[984]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 10:02:40.855309 ignition[984]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 10:02:40.872689 ignition[984]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 10:02:40.876443 ignition[984]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 10:02:40.877963 ignition[984]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 10:02:40.877963 ignition[984]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 10:02:40.877963 ignition[984]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 10:02:40.877963 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 10:02:40.877963 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 10:02:40.877963 ignition[984]: INFO : files: files passed
May 13 10:02:40.877963 ignition[984]: INFO : Ignition finished successfully
May 13 10:02:40.883446 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 10:02:40.885648 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 10:02:40.888524 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 10:02:40.899907 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 10:02:40.901473 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 10:02:40.901552 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 10:02:40.906511 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 10:02:40.906511 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 10:02:40.911466 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 10:02:40.908880 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 10:02:40.910445 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 10:02:40.913894 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 10:02:40.939591 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 10:02:40.939690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 10:02:40.941933 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 10:02:40.942922 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 10:02:40.944877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 10:02:40.945594 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 10:02:40.980585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 10:02:40.983263 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 10:02:41.003485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 10:02:41.004771 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:02:41.006883 systemd[1]: Stopped target timers.target - Timer Units.
May 13 10:02:41.008653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 10:02:41.008774 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 10:02:41.011227 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 10:02:41.013154 systemd[1]: Stopped target basic.target - Basic System.
May 13 10:02:41.014679 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 10:02:41.016348 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 10:02:41.018194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 10:02:41.020071 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 10:02:41.021867 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 10:02:41.023666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 10:02:41.025574 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 10:02:41.027477 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 10:02:41.029132 systemd[1]: Stopped target swap.target - Swaps.
May 13 10:02:41.030507 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 10:02:41.030622 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 10:02:41.032872 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 10:02:41.034749 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:02:41.036607 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 10:02:41.039814 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:02:41.040965 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 10:02:41.041071 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 10:02:41.043772 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 10:02:41.043906 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 10:02:41.045859 systemd[1]: Stopped target paths.target - Path Units.
May 13 10:02:41.047425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 10:02:41.055720 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:02:41.057064 systemd[1]: Stopped target slices.target - Slice Units.
May 13 10:02:41.059060 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 10:02:41.060580 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 10:02:41.060663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 10:02:41.062183 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 10:02:41.062257 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 10:02:41.063806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 10:02:41.063919 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 10:02:41.065674 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 10:02:41.065776 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 10:02:41.067969 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 10:02:41.070306 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 10:02:41.071479 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 10:02:41.071589 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:02:41.073354 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 10:02:41.073452 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 10:02:41.079807 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 10:02:41.081866 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 10:02:41.086259 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 10:02:41.091663 ignition[1041]: INFO : Ignition 2.21.0
May 13 10:02:41.091663 ignition[1041]: INFO : Stage: umount
May 13 10:02:41.093832 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 10:02:41.093832 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 10:02:41.096404 ignition[1041]: INFO : umount: umount passed
May 13 10:02:41.097899 ignition[1041]: INFO : Ignition finished successfully
May 13 10:02:41.099026 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 10:02:41.099136 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 10:02:41.100501 systemd[1]: Stopped target network.target - Network.
May 13 10:02:41.101398 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 10:02:41.101462 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 10:02:41.103126 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 10:02:41.103169 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 10:02:41.104685 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 10:02:41.104730 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 10:02:41.106278 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 10:02:41.106319 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 10:02:41.108060 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 10:02:41.109617 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 10:02:41.113687 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 10:02:41.113813 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 10:02:41.116900 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 10:02:41.117145 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 10:02:41.117179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:02:41.120979 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 10:02:41.121196 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 10:02:41.121300 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 10:02:41.124035 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 10:02:41.124391 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 10:02:41.125670 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 10:02:41.125704 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:02:41.128362 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 10:02:41.129526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 10:02:41.129577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 10:02:41.131909 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 10:02:41.131950 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 10:02:41.135716 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 10:02:41.135758 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 10:02:41.137418 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:02:41.143302 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 10:02:41.152332 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 10:02:41.152946 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 10:02:41.155076 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 10:02:41.155211 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:02:41.157208 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 10:02:41.158812 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 10:02:41.161019 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 10:02:41.161064 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 10:02:41.162458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 10:02:41.162485 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:02:41.164192 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 10:02:41.164235 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 10:02:41.166650 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 10:02:41.166694 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 10:02:41.169348 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 10:02:41.169392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 10:02:41.172090 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 10:02:41.172136 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 10:02:41.174486 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 10:02:41.175510 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 10:02:41.175559 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:02:41.178310 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 10:02:41.178348 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:02:41.181382 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 10:02:41.181420 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:02:41.184434 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 10:02:41.184471 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:02:41.186467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 10:02:41.186509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:02:41.190062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 10:02:41.190158 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 10:02:41.192180 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 10:02:41.194314 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 10:02:41.215808 systemd[1]: Switching root.
May 13 10:02:41.250290 systemd-journald[245]: Journal stopped
May 13 10:02:42.013595 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
May 13 10:02:42.013651 kernel: SELinux: policy capability network_peer_controls=1
May 13 10:02:42.013662 kernel: SELinux: policy capability open_perms=1
May 13 10:02:42.013672 kernel: SELinux: policy capability extended_socket_class=1
May 13 10:02:42.013682 kernel: SELinux: policy capability always_check_network=0
May 13 10:02:42.013693 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 10:02:42.013702 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 10:02:42.013711 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 10:02:42.013721 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 10:02:42.013730 kernel: SELinux: policy capability userspace_initial_context=0
May 13 10:02:42.013739 kernel: audit: type=1403 audit(1747130561.414:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 10:02:42.013755 systemd[1]: Successfully loaded SELinux policy in 49.980ms.
May 13 10:02:42.013774 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.459ms.
May 13 10:02:42.013847 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 10:02:42.013860 systemd[1]: Detected virtualization kvm.
May 13 10:02:42.013870 systemd[1]: Detected architecture arm64.
May 13 10:02:42.013879 systemd[1]: Detected first boot.
May 13 10:02:42.013889 systemd[1]: Initializing machine ID from VM UUID.
May 13 10:02:42.013899 zram_generator::config[1088]: No configuration found.
May 13 10:02:42.013911 kernel: NET: Registered PF_VSOCK protocol family
May 13 10:02:42.013922 systemd[1]: Populated /etc with preset unit settings.
May 13 10:02:42.013933 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 10:02:42.013943 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 10:02:42.013952 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 10:02:42.013962 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 10:02:42.013972 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 10:02:42.013985 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 10:02:42.013995 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 10:02:42.014006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 10:02:42.014017 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 10:02:42.014027 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 10:02:42.014036 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 10:02:42.014046 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 10:02:42.014056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 10:02:42.014066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 10:02:42.014076 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 10:02:42.014086 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 10:02:42.014097 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 10:02:42.014107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 10:02:42.014117 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 10:02:42.014127 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 10:02:42.014137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 10:02:42.014147 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 10:02:42.014157 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 10:02:42.014168 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 10:02:42.014178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 10:02:42.014188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 10:02:42.014198 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 10:02:42.014209 systemd[1]: Reached target slices.target - Slice Units.
May 13 10:02:42.014220 systemd[1]: Reached target swap.target - Swaps.
May 13 10:02:42.014229 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 10:02:42.014239 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 10:02:42.014249 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 10:02:42.014259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 10:02:42.014278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 10:02:42.014294 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 10:02:42.014304 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 10:02:42.014315 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 10:02:42.014324 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 10:02:42.014334 systemd[1]: Mounting media.mount - External Media Directory...
May 13 10:02:42.014344 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 10:02:42.014354 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 10:02:42.014364 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 10:02:42.014376 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 10:02:42.014386 systemd[1]: Reached target machines.target - Containers.
May 13 10:02:42.014397 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 10:02:42.014406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:02:42.014416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 10:02:42.014426 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 10:02:42.014436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:02:42.014446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 10:02:42.014457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:02:42.014467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 10:02:42.014477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:02:42.014488 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 10:02:42.014499 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 10:02:42.014509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 10:02:42.014518 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 10:02:42.014528 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 10:02:42.014539 kernel: fuse: init (API version 7.41)
May 13 10:02:42.014549 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:02:42.014559 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 10:02:42.014569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 10:02:42.014578 kernel: loop: module loaded
May 13 10:02:42.014587 kernel: ACPI: bus type drm_connector registered
May 13 10:02:42.014597 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 10:02:42.014607 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 10:02:42.014617 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 10:02:42.014628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 10:02:42.014639 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 10:02:42.014648 systemd[1]: Stopped verity-setup.service.
May 13 10:02:42.014681 systemd-journald[1160]: Collecting audit messages is disabled.
May 13 10:02:42.014705 systemd-journald[1160]: Journal started
May 13 10:02:42.014726 systemd-journald[1160]: Runtime Journal (/run/log/journal/207a4f34bb2e4e14a530100cb6bcec5c) is 6M, max 48.5M, 42.4M free.
May 13 10:02:42.023871 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 10:02:42.023909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 10:02:42.023923 systemd[1]: Mounted media.mount - External Media Directory.
May 13 10:02:42.023941 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 10:02:42.023953 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 10:02:41.787375 systemd[1]: Queued start job for default target multi-user.target.
May 13 10:02:41.812698 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 10:02:41.813077 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 10:02:42.027995 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 10:02:42.028618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 10:02:42.029887 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 10:02:42.032083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 10:02:42.033707 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 10:02:42.033913 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 10:02:42.035417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:02:42.035563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:02:42.036952 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 10:02:42.037112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 10:02:42.038507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:02:42.038672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:02:42.040103 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 10:02:42.040277 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 10:02:42.041548 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:02:42.041697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:02:42.043128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 10:02:42.044516 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 10:02:42.046026 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 10:02:42.047594 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 10:02:42.059505 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 10:02:42.063859 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 10:02:42.077529 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 10:02:42.078758 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 10:02:42.078806 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 10:02:42.080689 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 10:02:42.083040 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 10:02:42.084233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:02:42.085674 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 10:02:42.087619 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 10:02:42.088874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 10:02:42.090919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 10:02:42.092072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 10:02:42.092964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 10:02:42.099007 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 10:02:42.100313 systemd-journald[1160]: Time spent on flushing to /var/log/journal/207a4f34bb2e4e14a530100cb6bcec5c is 11.396ms for 885 entries.
May 13 10:02:42.100313 systemd-journald[1160]: System Journal (/var/log/journal/207a4f34bb2e4e14a530100cb6bcec5c) is 8M, max 195.6M, 187.6M free.
May 13 10:02:42.132939 systemd-journald[1160]: Received client request to flush runtime journal.
May 13 10:02:42.132986 kernel: loop0: detected capacity change from 0 to 107312
May 13 10:02:42.133005 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 10:02:42.103911 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 10:02:42.107097 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 10:02:42.108637 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 10:02:42.109971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 10:02:42.111652 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 10:02:42.114376 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 10:02:42.127049 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 10:02:42.128596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 10:02:42.135176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 10:02:42.150898 kernel: loop1: detected capacity change from 0 to 138376
May 13 10:02:42.151975 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
May 13 10:02:42.151992 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
May 13 10:02:42.154886 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 10:02:42.160958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 10:02:42.164379 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 10:02:42.179806 kernel: loop2: detected capacity change from 0 to 201592
May 13 10:02:42.196949 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 10:02:42.199838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 10:02:42.203806 kernel: loop3: detected capacity change from 0 to 107312
May 13 10:02:42.212796 kernel: loop4: detected capacity change from 0 to 138376
May 13 10:02:42.223837 kernel: loop5: detected capacity change from 0 to 201592
May 13 10:02:42.228341 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
May 13 10:02:42.228649 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
May 13 10:02:42.228912 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 10:02:42.229443 (sd-merge)[1227]: Merged extensions into '/usr'.
May 13 10:02:42.234824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 10:02:42.236861 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 10:02:42.236876 systemd[1]: Reloading...
May 13 10:02:42.292856 zram_generator::config[1251]: No configuration found.
May 13 10:02:42.383317 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 10:02:42.389612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 10:02:42.452527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 10:02:42.452618 systemd[1]: Reloading finished in 215 ms.
May 13 10:02:42.487365 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 10:02:42.488861 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 10:02:42.505187 systemd[1]: Starting ensure-sysext.service...
May 13 10:02:42.507059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 10:02:42.523002 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 10:02:42.523377 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 10:02:42.523777 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 10:02:42.524080 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 10:02:42.524773 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 10:02:42.525089 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
May 13 10:02:42.525108 systemd[1]: Reloading...
May 13 10:02:42.525261 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
May 13 10:02:42.525384 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
May 13 10:02:42.528151 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
May 13 10:02:42.528163 systemd-tmpfiles[1290]: Skipping /boot
May 13 10:02:42.536885 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
May 13 10:02:42.536984 systemd-tmpfiles[1290]: Skipping /boot
May 13 10:02:42.577250 zram_generator::config[1317]: No configuration found.
May 13 10:02:42.646894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 10:02:42.709598 systemd[1]: Reloading finished in 184 ms.
May 13 10:02:42.730008 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 10:02:42.737689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 10:02:42.748340 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 10:02:42.750723 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 10:02:42.753271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 10:02:42.756769 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 10:02:42.760181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 10:02:42.763927 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 10:02:42.772001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:02:42.776014 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:02:42.780048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:02:42.782274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:02:42.783405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:02:42.783531 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:02:42.785311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 10:02:42.787524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:02:42.787706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:02:42.794995 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 10:02:42.797130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:02:42.797360 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:02:42.799242 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:02:42.799404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:02:42.806176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:02:42.809045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:02:42.811209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:02:42.814123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:02:42.815340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:02:42.815456 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:02:42.817580 systemd-udevd[1363]: Using default interface naming scheme 'v255'.
May 13 10:02:42.824390 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 10:02:42.829396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 10:02:42.834160 augenrules[1390]: No rules
May 13 10:02:42.834562 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 10:02:42.837225 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 10:02:42.839169 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 10:02:42.840927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 10:02:42.843031 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 10:02:42.844581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:02:42.846452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:02:42.848061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:02:42.848210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:02:42.851861 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:02:42.852028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:02:42.856597 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 10:02:42.878819 systemd[1]: Finished ensure-sysext.service.
May 13 10:02:42.891821 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 10:02:42.892904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 10:02:42.893881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 10:02:42.897030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 10:02:42.902772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 10:02:42.912074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 10:02:42.913257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 10:02:42.913314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 10:02:42.916463 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 10:02:42.923056 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 10:02:42.924330 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 10:02:42.933132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 10:02:42.933441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 10:02:42.939183 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 10:02:42.948016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 10:02:42.949437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 10:02:42.949591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 10:02:42.951063 augenrules[1434]: /sbin/augenrules: No change
May 13 10:02:42.954644 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 10:02:42.954891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 10:02:42.961853 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 10:02:42.973038 augenrules[1469]: No rules
May 13 10:02:42.975121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 10:02:42.975173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 10:02:42.978154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 10:02:42.981174 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 10:02:42.981403 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 10:02:42.988293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 10:02:43.018608 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 10:02:43.037307 systemd-resolved[1357]: Positive Trust Anchors:
May 13 10:02:43.037323 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 10:02:43.037355 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 10:02:43.039613 systemd-networkd[1441]: lo: Link UP
May 13 10:02:43.039620 systemd-networkd[1441]: lo: Gained carrier
May 13 10:02:43.040467 systemd-networkd[1441]: Enumeration completed
May 13 10:02:43.040570 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 10:02:43.045733 systemd-resolved[1357]: Defaulting to hostname 'linux'.
May 13 10:02:43.046400 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:02:43.046411 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 10:02:43.046844 systemd-networkd[1441]: eth0: Link UP
May 13 10:02:43.046943 systemd-networkd[1441]: eth0: Gained carrier
May 13 10:02:43.046957 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 10:02:43.047246 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 10:02:43.049400 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 10:02:43.052882 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 10:02:43.054086 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 10:02:43.055208 systemd[1]: Reached target network.target - Network.
May 13 10:02:43.056067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 10:02:43.057198 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 10:02:43.058465 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 10:02:43.060847 systemd-networkd[1441]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 10:02:43.060965 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 10:02:43.061319 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection.
May 13 10:02:43.062167 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 10:02:43.063346 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 10:02:43.063378 systemd[1]: Reached target paths.target - Path Units.
May 13 10:02:43.064895 systemd[1]: Reached target time-set.target - System Time Set.
May 13 10:02:43.066032 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 10:02:43.068025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 10:02:43.069284 systemd[1]: Reached target timers.target - Timer Units.
May 13 10:02:43.071214 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 10:02:43.071276 systemd-timesyncd[1442]: Initial clock synchronization to Tue 2025-05-13 10:02:43.069390 UTC.
May 13 10:02:43.071563 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 10:02:43.074632 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 10:02:43.079734 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 10:02:43.082729 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 10:02:43.084920 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 10:02:43.094616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 10:02:43.097197 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 10:02:43.099068 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 10:02:43.106792 systemd[1]: Reached target sockets.target - Socket Units.
May 13 10:02:43.107712 systemd[1]: Reached target basic.target - Basic System.
May 13 10:02:43.108698 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 10:02:43.108726 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 10:02:43.114682 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 10:02:43.117516 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 10:02:43.119322 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 10:02:43.122420 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 10:02:43.124904 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 10:02:43.125910 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 10:02:43.126994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 10:02:43.129877 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 10:02:43.132894 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 10:02:43.135610 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 10:02:43.137720 jq[1502]: false
May 13 10:02:43.149477 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 10:02:43.151939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 10:02:43.153343 extend-filesystems[1503]: Found loop3
May 13 10:02:43.156143 extend-filesystems[1503]: Found loop4
May 13 10:02:43.156143 extend-filesystems[1503]: Found loop5
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda1
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda2
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda3
May 13 10:02:43.156143 extend-filesystems[1503]: Found usr
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda4
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda6
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda7
May 13 10:02:43.156143 extend-filesystems[1503]: Found vda9
May 13 10:02:43.156143 extend-filesystems[1503]: Checking size of /dev/vda9
May 13 10:02:43.155105 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 10:02:43.176947 extend-filesystems[1503]: Resized partition /dev/vda9
May 13 10:02:43.155607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 10:02:43.156977 systemd[1]: Starting update-engine.service - Update Engine...
May 13 10:02:43.162090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 10:02:43.172385 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 10:02:43.177249 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 10:02:43.179989 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 10:02:43.183913 jq[1523]: true
May 13 10:02:43.180596 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 10:02:43.180914 systemd[1]: motdgen.service: Deactivated successfully.
May 13 10:02:43.181084 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 10:02:43.183323 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 10:02:43.183510 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 10:02:43.184961 extend-filesystems[1526]: resize2fs 1.47.2 (1-Jan-2025)
May 13 10:02:43.197246 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 10:02:43.212999 tar[1528]: linux-arm64/LICENSE
May 13 10:02:43.212999 tar[1528]: linux-arm64/helm
May 13 10:02:43.213976 jq[1530]: true
May 13 10:02:43.222219 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 10:02:43.231221 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 10:02:43.233949 systemd-logind[1515]: New seat seat0.
May 13 10:02:43.234836 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 10:02:43.253152 dbus-daemon[1500]: [system] SELinux support is enabled
May 13 10:02:43.253321 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 10:02:43.257808 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 10:02:43.264359 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 10:02:43.264389 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 10:02:43.265283 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 13 10:02:43.266340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 10:02:43.270695 update_engine[1520]: I20250513 10:02:43.269284 1520 main.cc:92] Flatcar Update Engine starting
May 13 10:02:43.266355 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 10:02:43.267917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 10:02:43.272803 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 10:02:43.272803 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 10:02:43.272803 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 10:02:43.272661 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 10:02:43.280580 extend-filesystems[1503]: Resized filesystem in /dev/vda9
May 13 10:02:43.272893 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 10:02:43.282340 update_engine[1520]: I20250513 10:02:43.281919 1520 update_check_scheduler.cc:74] Next update check in 3m21s
May 13 10:02:43.281854 systemd[1]: Started update-engine.service - Update Engine.
May 13 10:02:43.287011 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 10:02:43.305070 bash[1565]: Updated "/home/core/.ssh/authorized_keys"
May 13 10:02:43.307058 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 10:02:43.309212 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 10:02:43.348065 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 10:02:43.472223 containerd[1534]: time="2025-05-13T10:02:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 10:02:43.474938 containerd[1534]: time="2025-05-13T10:02:43.473963040Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 13 10:02:43.483600 containerd[1534]: time="2025-05-13T10:02:43.483555360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.12µs"
May 13 10:02:43.483600 containerd[1534]: time="2025-05-13T10:02:43.483589960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 10:02:43.483600 containerd[1534]: time="2025-05-13T10:02:43.483608200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 10:02:43.483775 containerd[1534]: time="2025-05-13T10:02:43.483753760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 10:02:43.483813 containerd[1534]: time="2025-05-13T10:02:43.483774600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 10:02:43.483832 containerd[1534]: time="2025-05-13T10:02:43.483813480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 10:02:43.483891 containerd[1534]: time="2025-05-13T10:02:43.483870720Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 10:02:43.483891 containerd[1534]: time="2025-05-13T10:02:43.483886960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 10:02:43.484119 containerd[1534]: time="2025-05-13T10:02:43.484096320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 10:02:43.484119 containerd[1534]: time="2025-05-13T10:02:43.484115840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 10:02:43.484164 containerd[1534]: time="2025-05-13T10:02:43.484127520Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 10:02:43.484164 containerd[1534]: time="2025-05-13T10:02:43.484135760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 10:02:43.484225 containerd[1534]: time="2025-05-13T10:02:43.484207680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 10:02:43.484423 containerd[1534]: time="2025-05-13T10:02:43.484402320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 10:02:43.484457 containerd[1534]: time="2025-05-13T10:02:43.484440200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 10:02:43.484477 containerd[1534]: time="2025-05-13T10:02:43.484455920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 10:02:43.484507 containerd[1534]: time="2025-05-13T10:02:43.484489000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 10:02:43.484711 containerd[1534]: time="2025-05-13T10:02:43.484692440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 10:02:43.484771 containerd[1534]: time="2025-05-13T10:02:43.484754000Z" level=info msg="metadata content store policy set" policy=shared
May 13 10:02:43.488122 containerd[1534]: time="2025-05-13T10:02:43.488090480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 10:02:43.488188 containerd[1534]: time="2025-05-13T10:02:43.488140040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 10:02:43.488188 containerd[1534]: time="2025-05-13T10:02:43.488180920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 10:02:43.488223 containerd[1534]: time="2025-05-13T10:02:43.488194200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 10:02:43.488223 containerd[1534]: time="2025-05-13T10:02:43.488207840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 10:02:43.488256 containerd[1534]: time="2025-05-13T10:02:43.488230800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 10:02:43.488256 containerd[1534]: time="2025-05-13T10:02:43.488244120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 10:02:43.488299 containerd[1534]: time="2025-05-13T10:02:43.488255600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 10:02:43.488299 containerd[1534]: time="2025-05-13T10:02:43.488278880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 10:02:43.488299 containerd[1534]: time="2025-05-13T10:02:43.488291480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 10:02:43.488351 containerd[1534]: time="2025-05-13T10:02:43.488301080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 10:02:43.488351 containerd[1534]: time="2025-05-13T10:02:43.488313560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 10:02:43.488458 containerd[1534]: time="2025-05-13T10:02:43.488432920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 10:02:43.488482 containerd[1534]: time="2025-05-13T10:02:43.488465200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 10:02:43.488505 containerd[1534]: time="2025-05-13T10:02:43.488490920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 10:02:43.488505 containerd[1534]: time="2025-05-13T10:02:43.488502680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 10:02:43.488543 containerd[1534]: time="2025-05-13T10:02:43.488514440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 10:02:43.488543 containerd[1534]: time="2025-05-13T10:02:43.488524760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 10:02:43.488578 containerd[1534]: time="2025-05-13T10:02:43.488535680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 10:02:43.488578 containerd[1534]: time="2025-05-13T10:02:43.488557440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 10:02:43.488578 containerd[1534]: time="2025-05-13T10:02:43.488568200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 10:02:43.488629 containerd[1534]: time="2025-05-13T10:02:43.488578480Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 10:02:43.488629 containerd[1534]: time="2025-05-13T10:02:43.488588680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 10:02:43.488955 containerd[1534]: time="2025-05-13T10:02:43.488936640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 10:02:43.488992 containerd[1534]: time="2025-05-13T10:02:43.488958120Z" level=info msg="Start snapshots syncer"
May 13 10:02:43.488992 containerd[1534]: time="2025-05-13T10:02:43.488986280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 10:02:43.489799 containerd[1534]: time="2025-05-13T10:02:43.489186640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 10:02:43.489799 containerd[1534]: time="2025-05-13T10:02:43.489244640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489407480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489614520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489640640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489652880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489714840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489734200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489745800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489757280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489840080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489858120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 10:02:43.489903 containerd[1534]: time="2025-05-13T10:02:43.489870960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 10:02:43.491017 containerd[1534]: time="2025-05-13T10:02:43.490979960Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 10:02:43.491061 containerd[1534]: time="2025-05-13T10:02:43.491020840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 10:02:43.491061 containerd[1534]: time="2025-05-13T10:02:43.491037200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 10:02:43.491096 containerd[1534]: time="2025-05-13T10:02:43.491064320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 10:02:43.491096 containerd[1534]: time="2025-05-13T10:02:43.491074120Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 10:02:43.491142 containerd[1534]: time="2025-05-13T10:02:43.491114400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 10:02:43.491142 containerd[1534]: time="2025-05-13T10:02:43.491135080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 10:02:43.491179 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 10:02:43.491379 containerd[1534]: time="2025-05-13T10:02:43.491249920Z" level=info msg="runtime interface created"
May 13 10:02:43.491379 containerd[1534]: time="2025-05-13T10:02:43.491266920Z" level=info msg="created NRI interface"
May 13 10:02:43.491426 containerd[1534]: time="2025-05-13T10:02:43.491303840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 10:02:43.491463 containerd[1534]: time="2025-05-13T10:02:43.491443680Z" level=info msg="Connect containerd service"
May 13 10:02:43.491506 containerd[1534]: time="2025-05-13T10:02:43.491486920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 10:02:43.493055 containerd[1534]: time="2025-05-13T10:02:43.492973840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 10:02:43.510201 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 10:02:43.513211 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 10:02:43.529450 systemd[1]: issuegen.service: Deactivated successfully.
May 13 10:02:43.530863 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 10:02:43.534038 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 10:02:43.550266 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 10:02:43.555115 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 10:02:43.557458 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 10:02:43.560099 systemd[1]: Reached target getty.target - Login Prompts.
May 13 10:02:43.601353 containerd[1534]: time="2025-05-13T10:02:43.601289120Z" level=info msg="Start subscribing containerd event"
May 13 10:02:43.601450 containerd[1534]: time="2025-05-13T10:02:43.601366720Z" level=info msg="Start recovering state"
May 13 10:02:43.601520 containerd[1534]: time="2025-05-13T10:02:43.601464760Z" level=info msg="Start event monitor"
May 13 10:02:43.601520 containerd[1534]: time="2025-05-13T10:02:43.601491040Z" level=info msg="Start cni network conf syncer for default"
May 13 10:02:43.601520 containerd[1534]: time="2025-05-13T10:02:43.601504400Z" level=info msg="Start streaming server"
May 13 10:02:43.601520 containerd[1534]: time="2025-05-13T10:02:43.601512840Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 10:02:43.601520 containerd[1534]: time="2025-05-13T10:02:43.601519600Z" level=info msg="runtime interface starting up..."
May 13 10:02:43.601625 containerd[1534]: time="2025-05-13T10:02:43.601525640Z" level=info msg="starting plugins..."
May 13 10:02:43.601625 containerd[1534]: time="2025-05-13T10:02:43.601539720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 10:02:43.602024 containerd[1534]: time="2025-05-13T10:02:43.602001000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 10:02:43.602055 containerd[1534]: time="2025-05-13T10:02:43.602046120Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 10:02:43.602108 containerd[1534]: time="2025-05-13T10:02:43.602096440Z" level=info msg="containerd successfully booted in 0.130280s"
May 13 10:02:43.602191 systemd[1]: Started containerd.service - containerd container runtime.
May 13 10:02:43.663368 tar[1528]: linux-arm64/README.md
May 13 10:02:43.680066 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 10:02:44.152894 systemd-networkd[1441]: eth0: Gained IPv6LL
May 13 10:02:44.155503 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 10:02:44.159465 systemd[1]: Reached target network-online.target - Network is Online.
May 13 10:02:44.161970 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 10:02:44.164119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 10:02:44.170370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 10:02:44.184609 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 10:02:44.184849 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 10:02:44.186769 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 10:02:44.193599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 10:02:44.675419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 10:02:44.677018 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 10:02:44.678596 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 10:02:44.681867 systemd[1]: Startup finished in 2.117s (kernel) + 4.785s (initrd) + 3.320s (userspace) = 10.224s.
May 13 10:02:45.074539 kubelet[1635]: E0513 10:02:45.074351 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 10:02:45.076867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 10:02:45.077006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 10:02:45.077485 systemd[1]: kubelet.service: Consumed 775ms CPU time, 247M memory peak.
May 13 10:02:50.016673 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 10:02:50.017893 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:57416.service - OpenSSH per-connection server daemon (10.0.0.1:57416).
May 13 10:02:50.090775 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 57416 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw
May 13 10:02:50.092848 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 10:02:50.099661 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 10:02:50.100668 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 10:02:50.106167 systemd-logind[1515]: New session 1 of user core.
May 13 10:02:50.123819 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 10:02:50.126372 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 10:02:50.144709 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 10:02:50.146702 systemd-logind[1515]: New session c1 of user core.
May 13 10:02:50.255656 systemd[1653]: Queued start job for default target default.target.
May 13 10:02:50.276668 systemd[1653]: Created slice app.slice - User Application Slice. May 13 10:02:50.276697 systemd[1653]: Reached target paths.target - Paths. May 13 10:02:50.276735 systemd[1653]: Reached target timers.target - Timers. May 13 10:02:50.277941 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 10:02:50.287226 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 10:02:50.287283 systemd[1653]: Reached target sockets.target - Sockets. May 13 10:02:50.287330 systemd[1653]: Reached target basic.target - Basic System. May 13 10:02:50.287362 systemd[1653]: Reached target default.target - Main User Target. May 13 10:02:50.287388 systemd[1653]: Startup finished in 135ms. May 13 10:02:50.287461 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 10:02:50.289572 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 10:02:50.357955 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:57432.service - OpenSSH per-connection server daemon (10.0.0.1:57432). May 13 10:02:50.413161 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 57432 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:50.414369 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:50.418863 systemd-logind[1515]: New session 2 of user core. May 13 10:02:50.426930 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 10:02:50.477938 sshd[1666]: Connection closed by 10.0.0.1 port 57432 May 13 10:02:50.478359 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 13 10:02:50.488532 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:57432.service: Deactivated successfully. May 13 10:02:50.490548 systemd[1]: session-2.scope: Deactivated successfully. May 13 10:02:50.491362 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. 
May 13 10:02:50.493342 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:57440.service - OpenSSH per-connection server daemon (10.0.0.1:57440). May 13 10:02:50.494390 systemd-logind[1515]: Removed session 2. May 13 10:02:50.540596 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 57440 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:50.541738 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:50.546095 systemd-logind[1515]: New session 3 of user core. May 13 10:02:50.555926 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 10:02:50.603144 sshd[1674]: Connection closed by 10.0.0.1 port 57440 May 13 10:02:50.603431 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 13 10:02:50.613686 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:57440.service: Deactivated successfully. May 13 10:02:50.615953 systemd[1]: session-3.scope: Deactivated successfully. May 13 10:02:50.616648 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. May 13 10:02:50.618845 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:57446.service - OpenSSH per-connection server daemon (10.0.0.1:57446). May 13 10:02:50.619453 systemd-logind[1515]: Removed session 3. May 13 10:02:50.671986 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 57446 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:50.673048 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:50.677480 systemd-logind[1515]: New session 4 of user core. May 13 10:02:50.689904 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 13 10:02:50.742619 sshd[1682]: Connection closed by 10.0.0.1 port 57446 May 13 10:02:50.742976 sshd-session[1680]: pam_unix(sshd:session): session closed for user core May 13 10:02:50.753745 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:57446.service: Deactivated successfully. May 13 10:02:50.755287 systemd[1]: session-4.scope: Deactivated successfully. May 13 10:02:50.756542 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. May 13 10:02:50.758258 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:57456.service - OpenSSH per-connection server daemon (10.0.0.1:57456). May 13 10:02:50.759210 systemd-logind[1515]: Removed session 4. May 13 10:02:50.812873 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 57456 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:50.814211 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:50.817962 systemd-logind[1515]: New session 5 of user core. May 13 10:02:50.833919 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 10:02:50.895313 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 10:02:50.895575 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:02:50.908355 sudo[1691]: pam_unix(sudo:session): session closed for user root May 13 10:02:50.909945 sshd[1690]: Connection closed by 10.0.0.1 port 57456 May 13 10:02:50.910244 sshd-session[1688]: pam_unix(sshd:session): session closed for user core May 13 10:02:50.923543 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:57456.service: Deactivated successfully. May 13 10:02:50.926126 systemd[1]: session-5.scope: Deactivated successfully. May 13 10:02:50.926837 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. May 13 10:02:50.929006 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:57468.service - OpenSSH per-connection server daemon (10.0.0.1:57468). 
May 13 10:02:50.931160 systemd-logind[1515]: Removed session 5. May 13 10:02:50.978650 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 57468 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:50.979864 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:50.984391 systemd-logind[1515]: New session 6 of user core. May 13 10:02:50.993956 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 10:02:51.045110 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 10:02:51.045656 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:02:51.072033 sudo[1701]: pam_unix(sudo:session): session closed for user root May 13 10:02:51.077543 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 10:02:51.077834 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:02:51.085903 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 10:02:51.123647 augenrules[1723]: No rules May 13 10:02:51.124892 systemd[1]: audit-rules.service: Deactivated successfully. May 13 10:02:51.125132 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 10:02:51.127079 sudo[1700]: pam_unix(sudo:session): session closed for user root May 13 10:02:51.128800 sshd[1699]: Connection closed by 10.0.0.1 port 57468 May 13 10:02:51.128629 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 13 10:02:51.139696 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:57468.service: Deactivated successfully. May 13 10:02:51.141035 systemd[1]: session-6.scope: Deactivated successfully. May 13 10:02:51.142502 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. 
May 13 10:02:51.143813 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:57476.service - OpenSSH per-connection server daemon (10.0.0.1:57476). May 13 10:02:51.144541 systemd-logind[1515]: Removed session 6. May 13 10:02:51.198017 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 57476 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:02:51.199115 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:02:51.203661 systemd-logind[1515]: New session 7 of user core. May 13 10:02:51.213971 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 10:02:51.265076 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 10:02:51.265650 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 10:02:51.638837 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 10:02:51.651057 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 10:02:51.917496 dockerd[1756]: time="2025-05-13T10:02:51.917247315Z" level=info msg="Starting up" May 13 10:02:51.918199 dockerd[1756]: time="2025-05-13T10:02:51.918175330Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 10:02:51.956017 dockerd[1756]: time="2025-05-13T10:02:51.955971124Z" level=info msg="Loading containers: start." May 13 10:02:51.963814 kernel: Initializing XFRM netlink socket May 13 10:02:52.164351 systemd-networkd[1441]: docker0: Link UP May 13 10:02:52.167144 dockerd[1756]: time="2025-05-13T10:02:52.167105594Z" level=info msg="Loading containers: done." May 13 10:02:52.180303 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2179643985-merged.mount: Deactivated successfully. 
May 13 10:02:52.182165 dockerd[1756]: time="2025-05-13T10:02:52.182122369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 10:02:52.182229 dockerd[1756]: time="2025-05-13T10:02:52.182203443Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 10:02:52.182332 dockerd[1756]: time="2025-05-13T10:02:52.182305717Z" level=info msg="Initializing buildkit" May 13 10:02:52.203838 dockerd[1756]: time="2025-05-13T10:02:52.203792667Z" level=info msg="Completed buildkit initialization" May 13 10:02:52.208711 dockerd[1756]: time="2025-05-13T10:02:52.208678786Z" level=info msg="Daemon has completed initialization" May 13 10:02:52.208805 dockerd[1756]: time="2025-05-13T10:02:52.208754621Z" level=info msg="API listen on /run/docker.sock" May 13 10:02:52.208905 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 10:02:52.988024 containerd[1534]: time="2025-05-13T10:02:52.987985971Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 10:02:53.573447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723778232.mount: Deactivated successfully. 
May 13 10:02:54.720363 containerd[1534]: time="2025-05-13T10:02:54.720300169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:54.721197 containerd[1534]: time="2025-05-13T10:02:54.720941532Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 13 10:02:54.721858 containerd[1534]: time="2025-05-13T10:02:54.721824801Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:54.724789 containerd[1534]: time="2025-05-13T10:02:54.724748033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:54.726663 containerd[1534]: time="2025-05-13T10:02:54.726612485Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.738584918s" May 13 10:02:54.726700 containerd[1534]: time="2025-05-13T10:02:54.726667082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 13 10:02:54.727451 containerd[1534]: time="2025-05-13T10:02:54.727421119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 10:02:55.327379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 13 10:02:55.328772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:02:55.473425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:02:55.476928 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 10:02:55.512222 kubelet[2026]: E0513 10:02:55.512164 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 10:02:55.515018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 10:02:55.515160 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 10:02:55.515646 systemd[1]: kubelet.service: Consumed 135ms CPU time, 105.8M memory peak. 
May 13 10:02:56.245995 containerd[1534]: time="2025-05-13T10:02:56.245951296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:56.246898 containerd[1534]: time="2025-05-13T10:02:56.246689938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 13 10:02:56.247463 containerd[1534]: time="2025-05-13T10:02:56.247429741Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:56.250318 containerd[1534]: time="2025-05-13T10:02:56.250283316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:56.251301 containerd[1534]: time="2025-05-13T10:02:56.251271026Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.523818749s" May 13 10:02:56.251392 containerd[1534]: time="2025-05-13T10:02:56.251375941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 13 10:02:56.251871 containerd[1534]: time="2025-05-13T10:02:56.251848197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" 
May 13 10:02:57.477018 containerd[1534]: time="2025-05-13T10:02:57.476941075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:57.477664 containerd[1534]: time="2025-05-13T10:02:57.477623643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 13 10:02:57.478262 containerd[1534]: time="2025-05-13T10:02:57.478233854Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:57.485024 containerd[1534]: time="2025-05-13T10:02:57.484985773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:57.485978 containerd[1534]: time="2025-05-13T10:02:57.485935288Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.234057692s" May 13 10:02:57.485978 containerd[1534]: time="2025-05-13T10:02:57.485972806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 13 10:02:57.486527 containerd[1534]: time="2025-05-13T10:02:57.486353628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 10:02:58.399853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955876796.mount: Deactivated successfully. 
May 13 10:02:58.756252 containerd[1534]: time="2025-05-13T10:02:58.756104316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:58.757447 containerd[1534]: time="2025-05-13T10:02:58.757408498Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 10:02:58.758214 containerd[1534]: time="2025-05-13T10:02:58.758181304Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:58.762098 containerd[1534]: time="2025-05-13T10:02:58.762061371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:58.763230 containerd[1534]: time="2025-05-13T10:02:58.763190800Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.276809494s" May 13 10:02:58.763230 containerd[1534]: time="2025-05-13T10:02:58.763222599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 10:02:58.763869 containerd[1534]: time="2025-05-13T10:02:58.763709897Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 10:02:59.317273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638602926.mount: Deactivated successfully. 
May 13 10:02:59.955102 containerd[1534]: time="2025-05-13T10:02:59.955036702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:59.956169 containerd[1534]: time="2025-05-13T10:02:59.956130936Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 13 10:02:59.957004 containerd[1534]: time="2025-05-13T10:02:59.956977901Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:59.960391 containerd[1534]: time="2025-05-13T10:02:59.960352160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:02:59.961344 containerd[1534]: time="2025-05-13T10:02:59.961308280Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.197568384s" May 13 10:02:59.961344 containerd[1534]: time="2025-05-13T10:02:59.961339119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 13 10:02:59.961808 containerd[1534]: time="2025-05-13T10:02:59.961751221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 10:03:00.452327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717716342.mount: Deactivated successfully. 
May 13 10:03:00.457183 containerd[1534]: time="2025-05-13T10:03:00.457144102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:03:00.457666 containerd[1534]: time="2025-05-13T10:03:00.457635163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 10:03:00.458566 containerd[1534]: time="2025-05-13T10:03:00.458539367Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:03:00.460580 containerd[1534]: time="2025-05-13T10:03:00.460512450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 10:03:00.461112 containerd[1534]: time="2025-05-13T10:03:00.461081668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 499.254769ms" May 13 10:03:00.461282 containerd[1534]: time="2025-05-13T10:03:00.461196303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 10:03:00.461674 containerd[1534]: time="2025-05-13T10:03:00.461650006Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 10:03:00.966462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697470356.mount: Deactivated successfully. 
May 13 10:03:02.690388 containerd[1534]: time="2025-05-13T10:03:02.690337249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:02.856911 containerd[1534]: time="2025-05-13T10:03:02.856863918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 13 10:03:02.859615 containerd[1534]: time="2025-05-13T10:03:02.859560905Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:02.863411 containerd[1534]: time="2025-05-13T10:03:02.862693278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:02.863832 containerd[1534]: time="2025-05-13T10:03:02.863616006Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.401936921s" May 13 10:03:02.863832 containerd[1534]: time="2025-05-13T10:03:02.863649605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 13 10:03:05.765516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 10:03:05.766947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:03:05.885255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 10:03:05.888183 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 10:03:05.923044 kubelet[2191]: E0513 10:03:05.922990 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 10:03:05.925376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 10:03:05.925509 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 10:03:05.926851 systemd[1]: kubelet.service: Consumed 125ms CPU time, 103M memory peak. May 13 10:03:10.089692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:03:10.089863 systemd[1]: kubelet.service: Consumed 125ms CPU time, 103M memory peak. May 13 10:03:10.091750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:03:10.112385 systemd[1]: Reload requested from client PID 2206 ('systemctl') (unit session-7.scope)... May 13 10:03:10.112399 systemd[1]: Reloading... May 13 10:03:10.182824 zram_generator::config[2249]: No configuration found. May 13 10:03:10.290068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:03:10.373024 systemd[1]: Reloading finished in 260 ms. May 13 10:03:10.436274 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 10:03:10.436348 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 10:03:10.436568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 10:03:10.436610 systemd[1]: kubelet.service: Consumed 81ms CPU time, 90.1M memory peak. May 13 10:03:10.439025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:03:10.549090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:03:10.552225 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:03:10.585671 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:03:10.585671 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 10:03:10.585671 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 10:03:10.586029 kubelet[2293]: I0513 10:03:10.585764 2293 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:03:11.511999 kubelet[2293]: I0513 10:03:11.511958 2293 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 10:03:11.511999 kubelet[2293]: I0513 10:03:11.511991 2293 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:03:11.512288 kubelet[2293]: I0513 10:03:11.512274 2293 server.go:954] "Client rotation is on, will bootstrap in background" May 13 10:03:11.591791 kubelet[2293]: I0513 10:03:11.591727 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:03:11.592439 kubelet[2293]: E0513 10:03:11.592310 2293 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" May 13 10:03:11.616538 kubelet[2293]: I0513 10:03:11.616500 2293 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 10:03:11.620568 kubelet[2293]: I0513 10:03:11.620526 2293 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" 
May 13 10:03:11.620803 kubelet[2293]: I0513 10:03:11.620756 2293 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:03:11.620982 kubelet[2293]: I0513 10:03:11.620805 2293 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 10:03:11.621141 kubelet[2293]: I0513 10:03:11.621128 2293 topology_manager.go:138] "Creating topology manager with none policy" 
May 13 10:03:11.621141 kubelet[2293]: I0513 10:03:11.621139 2293 container_manager_linux.go:304] "Creating device plugin manager" May 13 10:03:11.621457 kubelet[2293]: I0513 10:03:11.621435 2293 state_mem.go:36] "Initialized new in-memory state store" May 13 10:03:11.630752 kubelet[2293]: I0513 10:03:11.630724 2293 kubelet.go:446] "Attempting to sync node with API server" May 13 10:03:11.630892 kubelet[2293]: I0513 10:03:11.630757 2293 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:03:11.631964 kubelet[2293]: I0513 10:03:11.631934 2293 kubelet.go:352] "Adding apiserver pod source" May 13 10:03:11.631964 kubelet[2293]: I0513 10:03:11.631957 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:03:11.633005 kubelet[2293]: W0513 10:03:11.632955 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused May 13 10:03:11.633054 kubelet[2293]: E0513 10:03:11.633013 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" May 13 10:03:11.636589 kubelet[2293]: W0513 10:03:11.636544 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused May 13 10:03:11.636803 kubelet[2293]: E0513 10:03:11.636750 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" 
May 13 10:03:11.648458 kubelet[2293]: I0513 10:03:11.647479 2293 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:03:11.648458 kubelet[2293]: I0513 10:03:11.648207 2293 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:03:11.648458 kubelet[2293]: W0513 10:03:11.648414 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 10:03:11.649853 kubelet[2293]: I0513 10:03:11.649832 2293 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 10:03:11.649949 kubelet[2293]: I0513 10:03:11.649939 2293 server.go:1287] "Started kubelet" May 13 10:03:11.652843 kubelet[2293]: I0513 10:03:11.652799 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:03:11.655187 kubelet[2293]: I0513 10:03:11.654838 2293 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:03:11.655187 kubelet[2293]: I0513 10:03:11.655138 2293 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:03:11.655801 kubelet[2293]: I0513 10:03:11.655303 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:03:11.655801 kubelet[2293]: I0513 10:03:11.655421 2293 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 10:03:11.655801 kubelet[2293]: I0513 10:03:11.655467 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 10:03:11.656046 kubelet[2293]: I0513 10:03:11.656022 2293 server.go:490] "Adding debug 
handlers to kubelet server" May 13 10:03:11.656728 kubelet[2293]: E0513 10:03:11.654913 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f0e043382e2bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 10:03:11.649915579 +0000 UTC m=+1.094915831,LastTimestamp:2025-05-13 10:03:11.649915579 +0000 UTC m=+1.094915831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 10:03:11.657720 kubelet[2293]: E0513 10:03:11.657680 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" May 13 10:03:11.657904 kubelet[2293]: I0513 10:03:11.657878 2293 factory.go:221] Registration of the systemd container factory successfully May 13 10:03:11.657981 kubelet[2293]: I0513 10:03:11.657961 2293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:03:11.658061 kubelet[2293]: E0513 10:03:11.658013 2293 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:03:11.659443 kubelet[2293]: I0513 10:03:11.658739 2293 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 10:03:11.659443 kubelet[2293]: I0513 
10:03:11.658818 2293 reconciler.go:26] "Reconciler: start to sync state" May 13 10:03:11.659443 kubelet[2293]: I0513 10:03:11.659227 2293 factory.go:221] Registration of the containerd container factory successfully May 13 10:03:11.659443 kubelet[2293]: W0513 10:03:11.659315 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused May 13 10:03:11.659443 kubelet[2293]: E0513 10:03:11.659355 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" May 13 10:03:11.660991 kubelet[2293]: E0513 10:03:11.660955 2293 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:03:11.668690 kubelet[2293]: I0513 10:03:11.668635 2293 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 10:03:11.668690 kubelet[2293]: I0513 10:03:11.668655 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 10:03:11.668690 kubelet[2293]: I0513 10:03:11.668684 2293 state_mem.go:36] "Initialized new in-memory state store" May 13 10:03:11.673562 kubelet[2293]: I0513 10:03:11.673502 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:03:11.674668 kubelet[2293]: I0513 10:03:11.674634 2293 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 10:03:11.674724 kubelet[2293]: I0513 10:03:11.674680 2293 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 10:03:11.674724 kubelet[2293]: I0513 10:03:11.674706 2293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 10:03:11.674724 kubelet[2293]: I0513 10:03:11.674712 2293 kubelet.go:2388] "Starting kubelet main sync loop" May 13 10:03:11.674806 kubelet[2293]: E0513 10:03:11.674756 2293 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:03:11.675482 kubelet[2293]: W0513 10:03:11.675382 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused May 13 10:03:11.675482 kubelet[2293]: E0513 10:03:11.675436 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" May 13 10:03:11.758588 kubelet[2293]: E0513 10:03:11.758547 2293 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:03:11.767001 kubelet[2293]: I0513 10:03:11.766927 2293 policy_none.go:49] "None policy: Start" May 13 10:03:11.767001 kubelet[2293]: I0513 10:03:11.766950 2293 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 10:03:11.767001 kubelet[2293]: I0513 10:03:11.766962 2293 state_mem.go:35] "Initializing new in-memory state store" May 13 10:03:11.772731 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. May 13 10:03:11.775387 kubelet[2293]: E0513 10:03:11.775363 2293 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 10:03:11.782513 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 10:03:11.785622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 10:03:11.807717 kubelet[2293]: I0513 10:03:11.807596 2293 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:03:11.808003 kubelet[2293]: I0513 10:03:11.807857 2293 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 10:03:11.808003 kubelet[2293]: I0513 10:03:11.807871 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:03:11.808161 kubelet[2293]: I0513 10:03:11.808105 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:03:11.809466 kubelet[2293]: E0513 10:03:11.809353 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 10:03:11.809466 kubelet[2293]: E0513 10:03:11.809405 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 10:03:11.859138 kubelet[2293]: E0513 10:03:11.859087 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" May 13 10:03:11.909684 kubelet[2293]: I0513 10:03:11.909320 2293 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:03:11.909771 kubelet[2293]: E0513 10:03:11.909737 2293 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" May 13 10:03:11.986392 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 10:03:12.008575 kubelet[2293]: E0513 10:03:12.008543 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:12.011329 systemd[1]: Created slice kubepods-burstable-pod7bc6744e0de982fb3da1e6036d17ee5a.slice - libcontainer container kubepods-burstable-pod7bc6744e0de982fb3da1e6036d17ee5a.slice. May 13 10:03:12.013670 kubelet[2293]: E0513 10:03:12.013640 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:12.016029 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 13 10:03:12.017668 kubelet[2293]: E0513 10:03:12.017594 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:12.062135 kubelet[2293]: I0513 10:03:12.062100 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bc6744e0de982fb3da1e6036d17ee5a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc6744e0de982fb3da1e6036d17ee5a\") " pod="kube-system/kube-apiserver-localhost" May 13 10:03:12.062135 kubelet[2293]: I0513 10:03:12.062140 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bc6744e0de982fb3da1e6036d17ee5a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7bc6744e0de982fb3da1e6036d17ee5a\") " pod="kube-system/kube-apiserver-localhost" May 13 10:03:12.062335 kubelet[2293]: I0513 10:03:12.062164 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:12.062335 kubelet[2293]: I0513 10:03:12.062181 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:12.062335 kubelet[2293]: I0513 10:03:12.062196 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:12.062335 kubelet[2293]: I0513 10:03:12.062220 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:12.062335 kubelet[2293]: I0513 10:03:12.062256 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:12.062475 kubelet[2293]: I0513 10:03:12.062304 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 10:03:12.062475 kubelet[2293]: I0513 10:03:12.062335 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bc6744e0de982fb3da1e6036d17ee5a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc6744e0de982fb3da1e6036d17ee5a\") " pod="kube-system/kube-apiserver-localhost" May 13 10:03:12.111610 kubelet[2293]: I0513 10:03:12.111576 2293 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:03:12.111907 kubelet[2293]: E0513 
10:03:12.111882 2293 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" May 13 10:03:12.259503 kubelet[2293]: E0513 10:03:12.259467 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" May 13 10:03:12.310469 containerd[1534]: time="2025-05-13T10:03:12.310381797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 10:03:12.315204 containerd[1534]: time="2025-05-13T10:03:12.315003633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7bc6744e0de982fb3da1e6036d17ee5a,Namespace:kube-system,Attempt:0,}" May 13 10:03:12.318517 containerd[1534]: time="2025-05-13T10:03:12.318488491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 10:03:12.332056 containerd[1534]: time="2025-05-13T10:03:12.332026406Z" level=info msg="connecting to shim da5d8d1dc2d212afefa2601752a65dfa3948340322e90e595305a0f9f1a2a6a3" address="unix:///run/containerd/s/7502ee30148a9d77a11fcc88566af671086a28c0b0c583ae07d3b252dd19ea55" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:12.342546 containerd[1534]: time="2025-05-13T10:03:12.342470538Z" level=info msg="connecting to shim 441a28d5bad01f603b1dd4c51511718baaa8b2732ff66ad3a4749c0855d175f8" address="unix:///run/containerd/s/680bdd1d85705d722d02fb3d638bb1469d918eeeb7e3a8c28d1d8b411039f174" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:12.357541 containerd[1534]: time="2025-05-13T10:03:12.357275031Z" level=info 
msg="connecting to shim 75f27ca2377faf2c30dc46fc4ebc1337f01f32a489107e4cf8dbc1d873badf2b" address="unix:///run/containerd/s/ea4b4355d3c4fd4e8d2101fb42e03a39a0f42bc4163a75a1115f8b7e24179248" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:12.362954 systemd[1]: Started cri-containerd-da5d8d1dc2d212afefa2601752a65dfa3948340322e90e595305a0f9f1a2a6a3.scope - libcontainer container da5d8d1dc2d212afefa2601752a65dfa3948340322e90e595305a0f9f1a2a6a3. May 13 10:03:12.366637 systemd[1]: Started cri-containerd-441a28d5bad01f603b1dd4c51511718baaa8b2732ff66ad3a4749c0855d175f8.scope - libcontainer container 441a28d5bad01f603b1dd4c51511718baaa8b2732ff66ad3a4749c0855d175f8. May 13 10:03:12.390003 systemd[1]: Started cri-containerd-75f27ca2377faf2c30dc46fc4ebc1337f01f32a489107e4cf8dbc1d873badf2b.scope - libcontainer container 75f27ca2377faf2c30dc46fc4ebc1337f01f32a489107e4cf8dbc1d873badf2b. May 13 10:03:12.407846 containerd[1534]: time="2025-05-13T10:03:12.406137749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5d8d1dc2d212afefa2601752a65dfa3948340322e90e595305a0f9f1a2a6a3\"" May 13 10:03:12.410772 containerd[1534]: time="2025-05-13T10:03:12.410726786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7bc6744e0de982fb3da1e6036d17ee5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"441a28d5bad01f603b1dd4c51511718baaa8b2732ff66ad3a4749c0855d175f8\"" May 13 10:03:12.411306 containerd[1534]: time="2025-05-13T10:03:12.411280456Z" level=info msg="CreateContainer within sandbox \"da5d8d1dc2d212afefa2601752a65dfa3948340322e90e595305a0f9f1a2a6a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 10:03:12.413269 containerd[1534]: time="2025-05-13T10:03:12.413216101Z" level=info msg="CreateContainer within sandbox 
\"441a28d5bad01f603b1dd4c51511718baaa8b2732ff66ad3a4749c0855d175f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 10:03:12.421577 containerd[1534]: time="2025-05-13T10:03:12.421545431Z" level=info msg="Container dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:12.422580 containerd[1534]: time="2025-05-13T10:03:12.422506013Z" level=info msg="Container 9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:12.430685 containerd[1534]: time="2025-05-13T10:03:12.430652506Z" level=info msg="CreateContainer within sandbox \"da5d8d1dc2d212afefa2601752a65dfa3948340322e90e595305a0f9f1a2a6a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493\"" May 13 10:03:12.431292 containerd[1534]: time="2025-05-13T10:03:12.431267615Z" level=info msg="StartContainer for \"dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493\"" May 13 10:03:12.432458 containerd[1534]: time="2025-05-13T10:03:12.432415074Z" level=info msg="connecting to shim dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493" address="unix:///run/containerd/s/7502ee30148a9d77a11fcc88566af671086a28c0b0c583ae07d3b252dd19ea55" protocol=ttrpc version=3 May 13 10:03:12.432627 containerd[1534]: time="2025-05-13T10:03:12.432595031Z" level=info msg="CreateContainer within sandbox \"441a28d5bad01f603b1dd4c51511718baaa8b2732ff66ad3a4749c0855d175f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43\"" May 13 10:03:12.433054 containerd[1534]: time="2025-05-13T10:03:12.433002424Z" level=info msg="StartContainer for \"9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43\"" May 13 10:03:12.434011 containerd[1534]: 
time="2025-05-13T10:03:12.433913407Z" level=info msg="connecting to shim 9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43" address="unix:///run/containerd/s/680bdd1d85705d722d02fb3d638bb1469d918eeeb7e3a8c28d1d8b411039f174" protocol=ttrpc version=3 May 13 10:03:12.437907 containerd[1534]: time="2025-05-13T10:03:12.437875256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"75f27ca2377faf2c30dc46fc4ebc1337f01f32a489107e4cf8dbc1d873badf2b\"" May 13 10:03:12.439882 containerd[1534]: time="2025-05-13T10:03:12.439853820Z" level=info msg="CreateContainer within sandbox \"75f27ca2377faf2c30dc46fc4ebc1337f01f32a489107e4cf8dbc1d873badf2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 10:03:12.448888 containerd[1534]: time="2025-05-13T10:03:12.448856858Z" level=info msg="Container a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:12.452980 systemd[1]: Started cri-containerd-dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493.scope - libcontainer container dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493. May 13 10:03:12.455714 containerd[1534]: time="2025-05-13T10:03:12.455564177Z" level=info msg="CreateContainer within sandbox \"75f27ca2377faf2c30dc46fc4ebc1337f01f32a489107e4cf8dbc1d873badf2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51\"" May 13 10:03:12.456286 systemd[1]: Started cri-containerd-9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43.scope - libcontainer container 9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43. 
May 13 10:03:12.457180 containerd[1534]: time="2025-05-13T10:03:12.457042470Z" level=info msg="StartContainer for \"a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51\"" May 13 10:03:12.458236 containerd[1534]: time="2025-05-13T10:03:12.458168450Z" level=info msg="connecting to shim a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51" address="unix:///run/containerd/s/ea4b4355d3c4fd4e8d2101fb42e03a39a0f42bc4163a75a1115f8b7e24179248" protocol=ttrpc version=3 May 13 10:03:12.485024 systemd[1]: Started cri-containerd-a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51.scope - libcontainer container a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51. May 13 10:03:12.508611 containerd[1534]: time="2025-05-13T10:03:12.505150642Z" level=info msg="StartContainer for \"9fc4997d2fa783ed612c959b5054f730cf1b674d09d12336758e7c2fe672fe43\" returns successfully" May 13 10:03:12.509196 containerd[1534]: time="2025-05-13T10:03:12.509166689Z" level=info msg="StartContainer for \"dfc286371857fa8cbfcc3e45f2227f97aa6109d34e4b90990269e00f1ba78493\" returns successfully" May 13 10:03:12.517270 kubelet[2293]: I0513 10:03:12.517177 2293 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:03:12.517652 kubelet[2293]: E0513 10:03:12.517619 2293 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" May 13 10:03:12.562818 containerd[1534]: time="2025-05-13T10:03:12.562655644Z" level=info msg="StartContainer for \"a93da039a88b21b98fadf07e57e0260961a75f34eb20fea0336347dadfabae51\" returns successfully" May 13 10:03:12.693025 kubelet[2293]: E0513 10:03:12.692986 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:12.695363 kubelet[2293]: E0513 10:03:12.695332 
2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:12.701011 kubelet[2293]: E0513 10:03:12.700983 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:13.319447 kubelet[2293]: I0513 10:03:13.319410 2293 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:03:13.700475 kubelet[2293]: E0513 10:03:13.700444 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:13.702953 kubelet[2293]: E0513 10:03:13.701026 2293 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 10:03:14.203657 kubelet[2293]: E0513 10:03:14.203586 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 10:03:14.285745 kubelet[2293]: I0513 10:03:14.285703 2293 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 10:03:14.357710 kubelet[2293]: I0513 10:03:14.357670 2293 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 10:03:14.367357 kubelet[2293]: E0513 10:03:14.367315 2293 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 10:03:14.367357 kubelet[2293]: I0513 10:03:14.367349 2293 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:03:14.369082 kubelet[2293]: E0513 10:03:14.369049 2293 kubelet.go:3202] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 10:03:14.369082 kubelet[2293]: I0513 10:03:14.369076 2293 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:03:14.371264 kubelet[2293]: E0513 10:03:14.371229 2293 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 10:03:14.634145 kubelet[2293]: I0513 10:03:14.634047 2293 apiserver.go:52] "Watching apiserver" May 13 10:03:14.659231 kubelet[2293]: I0513 10:03:14.659182 2293 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 10:03:16.008942 systemd[1]: Reload requested from client PID 2567 ('systemctl') (unit session-7.scope)... May 13 10:03:16.008958 systemd[1]: Reloading... May 13 10:03:16.085930 zram_generator::config[2610]: No configuration found. May 13 10:03:16.155157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 10:03:16.254812 systemd[1]: Reloading finished in 245 ms. 
May 13 10:03:16.283924 kubelet[2293]: I0513 10:03:16.282877 2293 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:03:16.283924 kubelet[2293]: E0513 10:03:16.282872 2293 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183f0e043382e2bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 10:03:11.649915579 +0000 UTC m=+1.094915831,LastTimestamp:2025-05-13 10:03:11.649915579 +0000 UTC m=+1.094915831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 10:03:16.283063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:03:16.296409 systemd[1]: kubelet.service: Deactivated successfully. May 13 10:03:16.296735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:03:16.296888 systemd[1]: kubelet.service: Consumed 1.523s CPU time, 124.2M memory peak. May 13 10:03:16.299143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 10:03:16.433365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 10:03:16.438933 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 10:03:16.479533 kubelet[2652]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 10:03:16.479533 kubelet[2652]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 10:03:16.479533 kubelet[2652]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 10:03:16.479883 kubelet[2652]: I0513 10:03:16.479552 2652 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 10:03:16.484919 kubelet[2652]: I0513 10:03:16.484891 2652 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 10:03:16.485020 kubelet[2652]: I0513 10:03:16.485009 2652 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 10:03:16.486647 kubelet[2652]: I0513 10:03:16.486620 2652 server.go:954] "Client rotation is on, will bootstrap in background" May 13 10:03:16.487987 kubelet[2652]: I0513 10:03:16.487970 2652 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 10:03:16.490668 kubelet[2652]: I0513 10:03:16.490645 2652 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 10:03:16.494817 kubelet[2652]: I0513 10:03:16.493900 2652 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 10:03:16.496388 kubelet[2652]: I0513 10:03:16.496363 2652 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 10:03:16.496570 kubelet[2652]: I0513 10:03:16.496535 2652 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 10:03:16.496754 kubelet[2652]: I0513 10:03:16.496567 2652 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 10:03:16.496845 kubelet[2652]: I0513 10:03:16.496759 2652 topology_manager.go:138] "Creating topology manager with none policy" 
May 13 10:03:16.496845 kubelet[2652]: I0513 10:03:16.496769 2652 container_manager_linux.go:304] "Creating device plugin manager" May 13 10:03:16.496845 kubelet[2652]: I0513 10:03:16.496832 2652 state_mem.go:36] "Initialized new in-memory state store" May 13 10:03:16.496983 kubelet[2652]: I0513 10:03:16.496961 2652 kubelet.go:446] "Attempting to sync node with API server" May 13 10:03:16.496983 kubelet[2652]: I0513 10:03:16.496976 2652 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 10:03:16.497026 kubelet[2652]: I0513 10:03:16.497005 2652 kubelet.go:352] "Adding apiserver pod source" May 13 10:03:16.497026 kubelet[2652]: I0513 10:03:16.497019 2652 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 10:03:16.499860 kubelet[2652]: I0513 10:03:16.499824 2652 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 10:03:16.500454 kubelet[2652]: I0513 10:03:16.500425 2652 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 10:03:16.502786 kubelet[2652]: I0513 10:03:16.500973 2652 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 10:03:16.502786 kubelet[2652]: I0513 10:03:16.501012 2652 server.go:1287] "Started kubelet" May 13 10:03:16.503591 kubelet[2652]: I0513 10:03:16.503480 2652 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 10:03:16.503807 kubelet[2652]: I0513 10:03:16.503771 2652 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 10:03:16.503882 kubelet[2652]: I0513 10:03:16.503860 2652 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 10:03:16.505044 kubelet[2652]: I0513 10:03:16.505008 2652 server.go:490] "Adding debug handlers to kubelet server" May 13 10:03:16.505816 kubelet[2652]: I0513 10:03:16.505445 2652 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 10:03:16.506795 kubelet[2652]: I0513 10:03:16.506164 2652 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 10:03:16.506795 kubelet[2652]: I0513 10:03:16.506604 2652 factory.go:221] Registration of the systemd container factory successfully May 13 10:03:16.506795 kubelet[2652]: I0513 10:03:16.506704 2652 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 10:03:16.507253 kubelet[2652]: I0513 10:03:16.507228 2652 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 10:03:16.508412 kubelet[2652]: E0513 10:03:16.507322 2652 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 10:03:16.509270 kubelet[2652]: I0513 10:03:16.509247 2652 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 10:03:16.509372 kubelet[2652]: I0513 10:03:16.509358 2652 reconciler.go:26] "Reconciler: start to sync state" May 13 10:03:16.510985 kubelet[2652]: E0513 10:03:16.510961 2652 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 10:03:16.513227 kubelet[2652]: I0513 10:03:16.513204 2652 factory.go:221] Registration of the containerd container factory successfully May 13 10:03:16.534596 kubelet[2652]: I0513 10:03:16.534456 2652 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 10:03:16.535649 kubelet[2652]: I0513 10:03:16.535622 2652 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 10:03:16.535649 kubelet[2652]: I0513 10:03:16.535650 2652 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 10:03:16.535757 kubelet[2652]: I0513 10:03:16.535670 2652 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 10:03:16.535757 kubelet[2652]: I0513 10:03:16.535677 2652 kubelet.go:2388] "Starting kubelet main sync loop" May 13 10:03:16.535757 kubelet[2652]: E0513 10:03:16.535714 2652 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 10:03:16.565242 kubelet[2652]: I0513 10:03:16.565211 2652 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 10:03:16.565242 kubelet[2652]: I0513 10:03:16.565233 2652 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 10:03:16.565242 kubelet[2652]: I0513 10:03:16.565255 2652 state_mem.go:36] "Initialized new in-memory state store" May 13 10:03:16.565412 kubelet[2652]: I0513 10:03:16.565405 2652 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 10:03:16.565435 kubelet[2652]: I0513 10:03:16.565416 2652 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 10:03:16.565435 kubelet[2652]: I0513 10:03:16.565434 2652 policy_none.go:49] "None policy: Start" May 13 10:03:16.565471 kubelet[2652]: I0513 10:03:16.565442 2652 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 10:03:16.565471 kubelet[2652]: I0513 10:03:16.565451 2652 state_mem.go:35] "Initializing new in-memory state store" May 13 10:03:16.565576 kubelet[2652]: I0513 10:03:16.565539 2652 state_mem.go:75] "Updated machine memory state" May 13 10:03:16.569561 kubelet[2652]: I0513 10:03:16.569514 2652 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 10:03:16.570259 kubelet[2652]: I0513 
10:03:16.569947 2652 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 10:03:16.570259 kubelet[2652]: I0513 10:03:16.569965 2652 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 10:03:16.570259 kubelet[2652]: I0513 10:03:16.570199 2652 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 10:03:16.571116 kubelet[2652]: E0513 10:03:16.571040 2652 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 10:03:16.637103 kubelet[2652]: I0513 10:03:16.637045 2652 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:03:16.637239 kubelet[2652]: I0513 10:03:16.637060 2652 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:03:16.637262 kubelet[2652]: I0513 10:03:16.637064 2652 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 10:03:16.672668 kubelet[2652]: I0513 10:03:16.672610 2652 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 10:03:16.678265 kubelet[2652]: I0513 10:03:16.678127 2652 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 10:03:16.678265 kubelet[2652]: I0513 10:03:16.678198 2652 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 10:03:16.711248 kubelet[2652]: I0513 10:03:16.711145 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bc6744e0de982fb3da1e6036d17ee5a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc6744e0de982fb3da1e6036d17ee5a\") " pod="kube-system/kube-apiserver-localhost" May 13 10:03:16.711248 kubelet[2652]: I0513 10:03:16.711204 2652 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:16.711415 kubelet[2652]: I0513 10:03:16.711278 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 10:03:16.711415 kubelet[2652]: I0513 10:03:16.711328 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:16.711415 kubelet[2652]: I0513 10:03:16.711359 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:16.711415 kubelet[2652]: I0513 10:03:16.711377 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:16.711415 kubelet[2652]: I0513 10:03:16.711392 2652 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bc6744e0de982fb3da1e6036d17ee5a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc6744e0de982fb3da1e6036d17ee5a\") " pod="kube-system/kube-apiserver-localhost" May 13 10:03:16.711569 kubelet[2652]: I0513 10:03:16.711406 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bc6744e0de982fb3da1e6036d17ee5a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7bc6744e0de982fb3da1e6036d17ee5a\") " pod="kube-system/kube-apiserver-localhost" May 13 10:03:16.711569 kubelet[2652]: I0513 10:03:16.711443 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 10:03:17.497831 kubelet[2652]: I0513 10:03:17.497798 2652 apiserver.go:52] "Watching apiserver" May 13 10:03:17.510011 kubelet[2652]: I0513 10:03:17.509977 2652 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 10:03:17.550848 kubelet[2652]: I0513 10:03:17.550675 2652 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 10:03:17.551568 kubelet[2652]: I0513 10:03:17.551522 2652 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 10:03:17.555853 kubelet[2652]: E0513 10:03:17.555821 2652 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 10:03:17.556028 kubelet[2652]: E0513 10:03:17.556008 2652 
kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 10:03:17.582527 kubelet[2652]: I0513 10:03:17.582307 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5822879429999999 podStartE2EDuration="1.582287943s" podCreationTimestamp="2025-05-13 10:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:17.582181144 +0000 UTC m=+1.140046374" watchObservedRunningTime="2025-05-13 10:03:17.582287943 +0000 UTC m=+1.140153173" May 13 10:03:17.582527 kubelet[2652]: I0513 10:03:17.582451 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.58244626 podStartE2EDuration="1.58244626s" podCreationTimestamp="2025-05-13 10:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:17.573931212 +0000 UTC m=+1.131796482" watchObservedRunningTime="2025-05-13 10:03:17.58244626 +0000 UTC m=+1.140311490" May 13 10:03:17.624995 kubelet[2652]: I0513 10:03:17.624919 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.624901586 podStartE2EDuration="1.624901586s" podCreationTimestamp="2025-05-13 10:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:17.604111137 +0000 UTC m=+1.161976447" watchObservedRunningTime="2025-05-13 10:03:17.624901586 +0000 UTC m=+1.182766816" May 13 10:03:21.475590 sudo[1735]: pam_unix(sudo:session): session closed for user root May 13 10:03:21.480140 sshd[1734]: Connection closed by 10.0.0.1 port 57476 
May 13 10:03:21.480177 sshd-session[1732]: pam_unix(sshd:session): session closed for user core May 13 10:03:21.483625 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:57476.service: Deactivated successfully. May 13 10:03:21.487074 systemd[1]: session-7.scope: Deactivated successfully. May 13 10:03:21.487308 systemd[1]: session-7.scope: Consumed 9.302s CPU time, 231.7M memory peak. May 13 10:03:21.488444 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. May 13 10:03:21.490082 systemd-logind[1515]: Removed session 7. May 13 10:03:21.822386 kubelet[2652]: I0513 10:03:21.822276 2652 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 10:03:21.823368 kubelet[2652]: I0513 10:03:21.822770 2652 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 10:03:21.823401 containerd[1534]: time="2025-05-13T10:03:21.822573626Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 10:03:22.545413 kubelet[2652]: I0513 10:03:22.545311 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b1710fe-85cf-4797-82e8-21893ae7ae11-kube-proxy\") pod \"kube-proxy-r7dhs\" (UID: \"9b1710fe-85cf-4797-82e8-21893ae7ae11\") " pod="kube-system/kube-proxy-r7dhs" May 13 10:03:22.545413 kubelet[2652]: I0513 10:03:22.545350 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b1710fe-85cf-4797-82e8-21893ae7ae11-xtables-lock\") pod \"kube-proxy-r7dhs\" (UID: \"9b1710fe-85cf-4797-82e8-21893ae7ae11\") " pod="kube-system/kube-proxy-r7dhs" May 13 10:03:22.545413 kubelet[2652]: I0513 10:03:22.545369 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b1710fe-85cf-4797-82e8-21893ae7ae11-lib-modules\") pod \"kube-proxy-r7dhs\" (UID: \"9b1710fe-85cf-4797-82e8-21893ae7ae11\") " pod="kube-system/kube-proxy-r7dhs" May 13 10:03:22.545413 kubelet[2652]: I0513 10:03:22.545386 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7tx2\" (UniqueName: \"kubernetes.io/projected/9b1710fe-85cf-4797-82e8-21893ae7ae11-kube-api-access-v7tx2\") pod \"kube-proxy-r7dhs\" (UID: \"9b1710fe-85cf-4797-82e8-21893ae7ae11\") " pod="kube-system/kube-proxy-r7dhs" May 13 10:03:22.553528 systemd[1]: Created slice kubepods-besteffort-pod9b1710fe_85cf_4797_82e8_21893ae7ae11.slice - libcontainer container kubepods-besteffort-pod9b1710fe_85cf_4797_82e8_21893ae7ae11.slice. 
May 13 10:03:22.653450 kubelet[2652]: E0513 10:03:22.653416 2652 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 10:03:22.653608 kubelet[2652]: E0513 10:03:22.653595 2652 projected.go:194] Error preparing data for projected volume kube-api-access-v7tx2 for pod kube-system/kube-proxy-r7dhs: configmap "kube-root-ca.crt" not found May 13 10:03:22.653717 kubelet[2652]: E0513 10:03:22.653703 2652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b1710fe-85cf-4797-82e8-21893ae7ae11-kube-api-access-v7tx2 podName:9b1710fe-85cf-4797-82e8-21893ae7ae11 nodeName:}" failed. No retries permitted until 2025-05-13 10:03:23.153681247 +0000 UTC m=+6.711546477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7tx2" (UniqueName: "kubernetes.io/projected/9b1710fe-85cf-4797-82e8-21893ae7ae11-kube-api-access-v7tx2") pod "kube-proxy-r7dhs" (UID: "9b1710fe-85cf-4797-82e8-21893ae7ae11") : configmap "kube-root-ca.crt" not found May 13 10:03:22.918239 kubelet[2652]: I0513 10:03:22.918112 2652 status_manager.go:890] "Failed to get status for pod" podUID="8855fbe8-8234-4c67-bf16-a5600230f7b3" pod="tigera-operator/tigera-operator-789496d6f5-qmxhg" err="pods \"tigera-operator-789496d6f5-qmxhg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" May 13 10:03:22.926557 systemd[1]: Created slice kubepods-besteffort-pod8855fbe8_8234_4c67_bf16_a5600230f7b3.slice - libcontainer container kubepods-besteffort-pod8855fbe8_8234_4c67_bf16_a5600230f7b3.slice. 
May 13 10:03:22.947180 kubelet[2652]: I0513 10:03:22.947131 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hxk\" (UniqueName: \"kubernetes.io/projected/8855fbe8-8234-4c67-bf16-a5600230f7b3-kube-api-access-v5hxk\") pod \"tigera-operator-789496d6f5-qmxhg\" (UID: \"8855fbe8-8234-4c67-bf16-a5600230f7b3\") " pod="tigera-operator/tigera-operator-789496d6f5-qmxhg" May 13 10:03:22.947301 kubelet[2652]: I0513 10:03:22.947237 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8855fbe8-8234-4c67-bf16-a5600230f7b3-var-lib-calico\") pod \"tigera-operator-789496d6f5-qmxhg\" (UID: \"8855fbe8-8234-4c67-bf16-a5600230f7b3\") " pod="tigera-operator/tigera-operator-789496d6f5-qmxhg" May 13 10:03:23.232141 containerd[1534]: time="2025-05-13T10:03:23.232087309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-qmxhg,Uid:8855fbe8-8234-4c67-bf16-a5600230f7b3,Namespace:tigera-operator,Attempt:0,}" May 13 10:03:23.246927 containerd[1534]: time="2025-05-13T10:03:23.246886577Z" level=info msg="connecting to shim c0e45f98807bc4223af19aaaa6fabfccbcf836a21f089060afd5b5c1fc982d78" address="unix:///run/containerd/s/9162ada302e5eea26e4a07f6a43cd034b4b4828fb49c4027b466bd7389f8f8a4" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:23.282034 systemd[1]: Started cri-containerd-c0e45f98807bc4223af19aaaa6fabfccbcf836a21f089060afd5b5c1fc982d78.scope - libcontainer container c0e45f98807bc4223af19aaaa6fabfccbcf836a21f089060afd5b5c1fc982d78. 
May 13 10:03:23.311572 containerd[1534]: time="2025-05-13T10:03:23.311505564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-qmxhg,Uid:8855fbe8-8234-4c67-bf16-a5600230f7b3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c0e45f98807bc4223af19aaaa6fabfccbcf836a21f089060afd5b5c1fc982d78\"" May 13 10:03:23.312966 containerd[1534]: time="2025-05-13T10:03:23.312932351Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 10:03:23.464518 containerd[1534]: time="2025-05-13T10:03:23.464470447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r7dhs,Uid:9b1710fe-85cf-4797-82e8-21893ae7ae11,Namespace:kube-system,Attempt:0,}" May 13 10:03:23.482555 containerd[1534]: time="2025-05-13T10:03:23.482277569Z" level=info msg="connecting to shim 6cb8c27e43b9c665a2645ef3bf008a4ffb2caf2ce31196179d2cee473366f5a9" address="unix:///run/containerd/s/af714398c776a05cadda582b69d3dac64d27498107bfd07d86a3deac245d72a0" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:23.502930 systemd[1]: Started cri-containerd-6cb8c27e43b9c665a2645ef3bf008a4ffb2caf2ce31196179d2cee473366f5a9.scope - libcontainer container 6cb8c27e43b9c665a2645ef3bf008a4ffb2caf2ce31196179d2cee473366f5a9. 
May 13 10:03:23.530148 containerd[1534]: time="2025-05-13T10:03:23.530102704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r7dhs,Uid:9b1710fe-85cf-4797-82e8-21893ae7ae11,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cb8c27e43b9c665a2645ef3bf008a4ffb2caf2ce31196179d2cee473366f5a9\"" May 13 10:03:23.533666 containerd[1534]: time="2025-05-13T10:03:23.533625633Z" level=info msg="CreateContainer within sandbox \"6cb8c27e43b9c665a2645ef3bf008a4ffb2caf2ce31196179d2cee473366f5a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 10:03:23.541856 containerd[1534]: time="2025-05-13T10:03:23.541706681Z" level=info msg="Container 34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:23.560632 containerd[1534]: time="2025-05-13T10:03:23.560587354Z" level=info msg="CreateContainer within sandbox \"6cb8c27e43b9c665a2645ef3bf008a4ffb2caf2ce31196179d2cee473366f5a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509\"" May 13 10:03:23.561832 containerd[1534]: time="2025-05-13T10:03:23.561800823Z" level=info msg="StartContainer for \"34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509\"" May 13 10:03:23.563524 containerd[1534]: time="2025-05-13T10:03:23.563425969Z" level=info msg="connecting to shim 34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509" address="unix:///run/containerd/s/af714398c776a05cadda582b69d3dac64d27498107bfd07d86a3deac245d72a0" protocol=ttrpc version=3 May 13 10:03:23.583063 systemd[1]: Started cri-containerd-34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509.scope - libcontainer container 34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509. 
May 13 10:03:23.626020 containerd[1534]: time="2025-05-13T10:03:23.625923254Z" level=info msg="StartContainer for \"34a0f90a534f2ee413ad0564970dde25c1fb661e0bb1e91396144550b6e0f509\" returns successfully" May 13 10:03:24.730543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661156241.mount: Deactivated successfully. May 13 10:03:25.121872 containerd[1534]: time="2025-05-13T10:03:25.121269669Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:25.122954 containerd[1534]: time="2025-05-13T10:03:25.122927096Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 10:03:25.123660 containerd[1534]: time="2025-05-13T10:03:25.123634490Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:25.126907 containerd[1534]: time="2025-05-13T10:03:25.126876665Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:25.127094 containerd[1534]: time="2025-05-13T10:03:25.126894225Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.813923794s" May 13 10:03:25.127145 containerd[1534]: time="2025-05-13T10:03:25.127093863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 10:03:25.131182 containerd[1534]: 
time="2025-05-13T10:03:25.131084672Z" level=info msg="CreateContainer within sandbox \"c0e45f98807bc4223af19aaaa6fabfccbcf836a21f089060afd5b5c1fc982d78\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 10:03:25.137384 containerd[1534]: time="2025-05-13T10:03:25.136868987Z" level=info msg="Container 9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:25.139975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900585199.mount: Deactivated successfully. May 13 10:03:25.141698 containerd[1534]: time="2025-05-13T10:03:25.141593950Z" level=info msg="CreateContainer within sandbox \"c0e45f98807bc4223af19aaaa6fabfccbcf836a21f089060afd5b5c1fc982d78\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb\"" May 13 10:03:25.142916 containerd[1534]: time="2025-05-13T10:03:25.142884300Z" level=info msg="StartContainer for \"9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb\"" May 13 10:03:25.144599 containerd[1534]: time="2025-05-13T10:03:25.144542367Z" level=info msg="connecting to shim 9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb" address="unix:///run/containerd/s/9162ada302e5eea26e4a07f6a43cd034b4b4828fb49c4027b466bd7389f8f8a4" protocol=ttrpc version=3 May 13 10:03:25.164948 systemd[1]: Started cri-containerd-9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb.scope - libcontainer container 9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb. 
May 13 10:03:25.189601 containerd[1534]: time="2025-05-13T10:03:25.189476137Z" level=info msg="StartContainer for \"9b8dafef5a029741eed570e687e8ae1506223162ebda651ea46e71a8f04258cb\" returns successfully" May 13 10:03:25.582527 kubelet[2652]: I0513 10:03:25.582458 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r7dhs" podStartSLOduration=3.582440312 podStartE2EDuration="3.582440312s" podCreationTimestamp="2025-05-13 10:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:24.589361551 +0000 UTC m=+8.147226781" watchObservedRunningTime="2025-05-13 10:03:25.582440312 +0000 UTC m=+9.140305542" May 13 10:03:25.583272 kubelet[2652]: I0513 10:03:25.582560 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-qmxhg" podStartSLOduration=1.765453462 podStartE2EDuration="3.582555271s" podCreationTimestamp="2025-05-13 10:03:22 +0000 UTC" firstStartedPulling="2025-05-13 10:03:23.312604234 +0000 UTC m=+6.870469464" lastFinishedPulling="2025-05-13 10:03:25.129706043 +0000 UTC m=+8.687571273" observedRunningTime="2025-05-13 10:03:25.582309473 +0000 UTC m=+9.140174703" watchObservedRunningTime="2025-05-13 10:03:25.582555271 +0000 UTC m=+9.140420581" May 13 10:03:28.458508 update_engine[1520]: I20250513 10:03:28.458009 1520 update_attempter.cc:509] Updating boot flags... May 13 10:03:29.044702 systemd[1]: Created slice kubepods-besteffort-pod416fa278_015e_45d5_90cd_1c96784470ba.slice - libcontainer container kubepods-besteffort-pod416fa278_015e_45d5_90cd_1c96784470ba.slice. 
May 13 10:03:29.088630 kubelet[2652]: I0513 10:03:29.088583 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/416fa278-015e-45d5-90cd-1c96784470ba-typha-certs\") pod \"calico-typha-668769db98-2nh7f\" (UID: \"416fa278-015e-45d5-90cd-1c96784470ba\") " pod="calico-system/calico-typha-668769db98-2nh7f" May 13 10:03:29.088630 kubelet[2652]: I0513 10:03:29.088633 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/416fa278-015e-45d5-90cd-1c96784470ba-tigera-ca-bundle\") pod \"calico-typha-668769db98-2nh7f\" (UID: \"416fa278-015e-45d5-90cd-1c96784470ba\") " pod="calico-system/calico-typha-668769db98-2nh7f" May 13 10:03:29.089680 kubelet[2652]: I0513 10:03:29.088653 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcp4\" (UniqueName: \"kubernetes.io/projected/416fa278-015e-45d5-90cd-1c96784470ba-kube-api-access-lzcp4\") pod \"calico-typha-668769db98-2nh7f\" (UID: \"416fa278-015e-45d5-90cd-1c96784470ba\") " pod="calico-system/calico-typha-668769db98-2nh7f" May 13 10:03:29.111218 systemd[1]: Created slice kubepods-besteffort-pod9b4fe65c_0af7_4f93_9d42_38ed0dd0b247.slice - libcontainer container kubepods-besteffort-pod9b4fe65c_0af7_4f93_9d42_38ed0dd0b247.slice. 
May 13 10:03:29.189993 kubelet[2652]: I0513 10:03:29.189612 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-node-certs\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.190387 kubelet[2652]: I0513 10:03:29.190275 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-cni-net-dir\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.191237 kubelet[2652]: I0513 10:03:29.191181 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-policysync\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192203 kubelet[2652]: I0513 10:03:29.191410 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-flexvol-driver-host\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192203 kubelet[2652]: I0513 10:03:29.191443 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-tigera-ca-bundle\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192203 kubelet[2652]: I0513 10:03:29.191461 2652 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-var-run-calico\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192203 kubelet[2652]: I0513 10:03:29.191491 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-xtables-lock\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192203 kubelet[2652]: I0513 10:03:29.191518 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-cni-log-dir\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192403 kubelet[2652]: I0513 10:03:29.191534 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvz24\" (UniqueName: \"kubernetes.io/projected/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-kube-api-access-rvz24\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192403 kubelet[2652]: I0513 10:03:29.191553 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-var-lib-calico\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192403 kubelet[2652]: I0513 10:03:29.191567 2652 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-cni-bin-dir\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.192403 kubelet[2652]: I0513 10:03:29.191582 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b4fe65c-0af7-4f93-9d42-38ed0dd0b247-lib-modules\") pod \"calico-node-4jm2j\" (UID: \"9b4fe65c-0af7-4f93-9d42-38ed0dd0b247\") " pod="calico-system/calico-node-4jm2j" May 13 10:03:29.217802 kubelet[2652]: E0513 10:03:29.217733 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:29.292436 kubelet[2652]: I0513 10:03:29.292206 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3d59e185-488f-4c52-86e4-340ff54919cf-socket-dir\") pod \"csi-node-driver-hqt2t\" (UID: \"3d59e185-488f-4c52-86e4-340ff54919cf\") " pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:29.293243 kubelet[2652]: I0513 10:03:29.292749 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d59e185-488f-4c52-86e4-340ff54919cf-registration-dir\") pod \"csi-node-driver-hqt2t\" (UID: \"3d59e185-488f-4c52-86e4-340ff54919cf\") " pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:29.293243 kubelet[2652]: I0513 10:03:29.292922 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d59e185-488f-4c52-86e4-340ff54919cf-kubelet-dir\") pod \"csi-node-driver-hqt2t\" (UID: \"3d59e185-488f-4c52-86e4-340ff54919cf\") " pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:29.293243 kubelet[2652]: I0513 10:03:29.292977 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3d59e185-488f-4c52-86e4-340ff54919cf-varrun\") pod \"csi-node-driver-hqt2t\" (UID: \"3d59e185-488f-4c52-86e4-340ff54919cf\") " pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:29.293243 kubelet[2652]: I0513 10:03:29.292997 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfj7m\" (UniqueName: \"kubernetes.io/projected/3d59e185-488f-4c52-86e4-340ff54919cf-kube-api-access-vfj7m\") pod \"csi-node-driver-hqt2t\" (UID: \"3d59e185-488f-4c52-86e4-340ff54919cf\") " pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:29.299882 kubelet[2652]: E0513 10:03:29.299803 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.299882 kubelet[2652]: W0513 10:03:29.299826 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.302093 kubelet[2652]: E0513 10:03:29.301893 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.312232 kubelet[2652]: E0513 10:03:29.312150 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.312232 kubelet[2652]: W0513 10:03:29.312173 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.312232 kubelet[2652]: E0513 10:03:29.312193 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.350942 containerd[1534]: time="2025-05-13T10:03:29.350835108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-668769db98-2nh7f,Uid:416fa278-015e-45d5-90cd-1c96784470ba,Namespace:calico-system,Attempt:0,}" May 13 10:03:29.393775 kubelet[2652]: E0513 10:03:29.393723 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.394343 kubelet[2652]: W0513 10:03:29.393807 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.394343 kubelet[2652]: E0513 10:03:29.394248 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.394693 kubelet[2652]: E0513 10:03:29.394626 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.394693 kubelet[2652]: W0513 10:03:29.394640 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.394693 kubelet[2652]: E0513 10:03:29.394657 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.394974 kubelet[2652]: E0513 10:03:29.394955 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.394974 kubelet[2652]: W0513 10:03:29.394974 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.395098 kubelet[2652]: E0513 10:03:29.395038 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.395492 kubelet[2652]: E0513 10:03:29.395474 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.395492 kubelet[2652]: W0513 10:03:29.395487 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.395963 kubelet[2652]: E0513 10:03:29.395503 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.395963 kubelet[2652]: E0513 10:03:29.395709 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.395963 kubelet[2652]: W0513 10:03:29.395718 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.395963 kubelet[2652]: E0513 10:03:29.395727 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.395963 kubelet[2652]: E0513 10:03:29.395957 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.395963 kubelet[2652]: W0513 10:03:29.395965 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396028 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396088 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.396691 kubelet[2652]: W0513 10:03:29.396094 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396220 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.396691 kubelet[2652]: W0513 10:03:29.396226 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396234 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396392 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.396691 kubelet[2652]: W0513 10:03:29.396401 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396410 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.396691 kubelet[2652]: E0513 10:03:29.396458 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.398059 kubelet[2652]: E0513 10:03:29.396594 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.398059 kubelet[2652]: W0513 10:03:29.396620 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.398059 kubelet[2652]: E0513 10:03:29.396637 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.398059 kubelet[2652]: E0513 10:03:29.396839 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.398059 kubelet[2652]: W0513 10:03:29.396874 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.398059 kubelet[2652]: E0513 10:03:29.396893 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.398696 kubelet[2652]: E0513 10:03:29.398385 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.398696 kubelet[2652]: W0513 10:03:29.398407 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.398696 kubelet[2652]: E0513 10:03:29.398432 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.399088 kubelet[2652]: E0513 10:03:29.398835 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.399175 kubelet[2652]: W0513 10:03:29.399150 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.399298 kubelet[2652]: E0513 10:03:29.399284 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.399513 kubelet[2652]: E0513 10:03:29.399495 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.399580 kubelet[2652]: W0513 10:03:29.399569 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.399744 kubelet[2652]: E0513 10:03:29.399730 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.399913 kubelet[2652]: E0513 10:03:29.399893 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.399990 kubelet[2652]: W0513 10:03:29.399978 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.400141 kubelet[2652]: E0513 10:03:29.400091 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.400282 kubelet[2652]: E0513 10:03:29.400246 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.400282 kubelet[2652]: W0513 10:03:29.400258 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.400359 kubelet[2652]: E0513 10:03:29.400347 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.400498 kubelet[2652]: E0513 10:03:29.400484 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.400536 kubelet[2652]: W0513 10:03:29.400498 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.400536 kubelet[2652]: E0513 10:03:29.400514 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.400658 kubelet[2652]: E0513 10:03:29.400649 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.400706 kubelet[2652]: W0513 10:03:29.400662 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.400706 kubelet[2652]: E0513 10:03:29.400676 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.400854 kubelet[2652]: E0513 10:03:29.400835 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.400854 kubelet[2652]: W0513 10:03:29.400847 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.400854 kubelet[2652]: E0513 10:03:29.400861 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.401010 kubelet[2652]: E0513 10:03:29.400999 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.401010 kubelet[2652]: W0513 10:03:29.401009 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.401112 kubelet[2652]: E0513 10:03:29.401022 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.401148 kubelet[2652]: E0513 10:03:29.401137 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.401148 kubelet[2652]: W0513 10:03:29.401144 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.401192 kubelet[2652]: E0513 10:03:29.401158 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.401305 kubelet[2652]: E0513 10:03:29.401294 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.401305 kubelet[2652]: W0513 10:03:29.401304 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.401368 kubelet[2652]: E0513 10:03:29.401317 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.401899 kubelet[2652]: E0513 10:03:29.401883 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.401973 kubelet[2652]: W0513 10:03:29.401961 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.402258 kubelet[2652]: E0513 10:03:29.402242 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.403065 kubelet[2652]: E0513 10:03:29.402987 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.403065 kubelet[2652]: W0513 10:03:29.403002 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.403065 kubelet[2652]: E0513 10:03:29.403015 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.403934 kubelet[2652]: E0513 10:03:29.403888 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.404059 kubelet[2652]: W0513 10:03:29.404009 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.404105 kubelet[2652]: E0513 10:03:29.404029 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.414409 kubelet[2652]: E0513 10:03:29.414387 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.414409 kubelet[2652]: W0513 10:03:29.414403 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.414557 kubelet[2652]: E0513 10:03:29.414418 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.416550 containerd[1534]: time="2025-05-13T10:03:29.415969195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jm2j,Uid:9b4fe65c-0af7-4f93-9d42-38ed0dd0b247,Namespace:calico-system,Attempt:0,}" May 13 10:03:29.520871 containerd[1534]: time="2025-05-13T10:03:29.520263967Z" level=info msg="connecting to shim 7a0857dde4b95e8633fdf69b18c53d2ec79ff8c6fce07c449f127dd461089ac2" address="unix:///run/containerd/s/6d4f0870678cb2786042535c925a01a64fd745bdc9460588ed81c9adc6b67239" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:29.520871 containerd[1534]: time="2025-05-13T10:03:29.520642245Z" level=info msg="connecting to shim 70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e" address="unix:///run/containerd/s/abd424d060709e298a666fb836bc673a35e6b60d5257a881a7fe0ded89f4acae" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:29.581954 systemd[1]: Started cri-containerd-70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e.scope - libcontainer container 70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e. May 13 10:03:29.583045 systemd[1]: Started cri-containerd-7a0857dde4b95e8633fdf69b18c53d2ec79ff8c6fce07c449f127dd461089ac2.scope - libcontainer container 7a0857dde4b95e8633fdf69b18c53d2ec79ff8c6fce07c449f127dd461089ac2. 
May 13 10:03:29.622385 containerd[1534]: time="2025-05-13T10:03:29.622111553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-668769db98-2nh7f,Uid:416fa278-015e-45d5-90cd-1c96784470ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a0857dde4b95e8633fdf69b18c53d2ec79ff8c6fce07c449f127dd461089ac2\"" May 13 10:03:29.624296 containerd[1534]: time="2025-05-13T10:03:29.624258341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 10:03:29.634857 containerd[1534]: time="2025-05-13T10:03:29.634765957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jm2j,Uid:9b4fe65c-0af7-4f93-9d42-38ed0dd0b247,Namespace:calico-system,Attempt:0,} returns sandbox id \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\"" May 13 10:03:29.883543 kubelet[2652]: E0513 10:03:29.883456 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.884052 kubelet[2652]: W0513 10:03:29.883655 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.884052 kubelet[2652]: E0513 10:03:29.883931 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.884367 kubelet[2652]: E0513 10:03:29.884170 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.884367 kubelet[2652]: W0513 10:03:29.884182 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.884367 kubelet[2652]: E0513 10:03:29.884232 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.884601 kubelet[2652]: E0513 10:03:29.884585 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.884662 kubelet[2652]: W0513 10:03:29.884650 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.884732 kubelet[2652]: E0513 10:03:29.884719 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.891168 kubelet[2652]: E0513 10:03:29.891105 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.891168 kubelet[2652]: W0513 10:03:29.891127 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.891168 kubelet[2652]: E0513 10:03:29.891140 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.891317 kubelet[2652]: E0513 10:03:29.891305 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.891344 kubelet[2652]: W0513 10:03:29.891320 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.891344 kubelet[2652]: E0513 10:03:29.891330 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.891496 kubelet[2652]: E0513 10:03:29.891468 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.891496 kubelet[2652]: W0513 10:03:29.891496 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.891575 kubelet[2652]: E0513 10:03:29.891507 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.891743 kubelet[2652]: E0513 10:03:29.891655 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.891743 kubelet[2652]: W0513 10:03:29.891664 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.891743 kubelet[2652]: E0513 10:03:29.891672 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.891854 kubelet[2652]: E0513 10:03:29.891835 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.891854 kubelet[2652]: W0513 10:03:29.891843 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.891854 kubelet[2652]: E0513 10:03:29.891852 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.892031 kubelet[2652]: E0513 10:03:29.892002 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.892031 kubelet[2652]: W0513 10:03:29.892015 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.892031 kubelet[2652]: E0513 10:03:29.892024 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.892190 kubelet[2652]: E0513 10:03:29.892156 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.892190 kubelet[2652]: W0513 10:03:29.892169 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.892259 kubelet[2652]: E0513 10:03:29.892195 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.892422 kubelet[2652]: E0513 10:03:29.892400 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.892422 kubelet[2652]: W0513 10:03:29.892413 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.892470 kubelet[2652]: E0513 10:03:29.892422 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.892717 kubelet[2652]: E0513 10:03:29.892702 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.892717 kubelet[2652]: W0513 10:03:29.892715 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.892796 kubelet[2652]: E0513 10:03:29.892725 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.893052 kubelet[2652]: E0513 10:03:29.893037 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893052 kubelet[2652]: W0513 10:03:29.893050 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893228 kubelet[2652]: E0513 10:03:29.893060 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.893228 kubelet[2652]: E0513 10:03:29.893210 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893228 kubelet[2652]: W0513 10:03:29.893218 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893228 kubelet[2652]: E0513 10:03:29.893226 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.893364 kubelet[2652]: E0513 10:03:29.893342 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893364 kubelet[2652]: W0513 10:03:29.893352 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893364 kubelet[2652]: E0513 10:03:29.893360 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.893558 kubelet[2652]: E0513 10:03:29.893510 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893558 kubelet[2652]: W0513 10:03:29.893518 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893558 kubelet[2652]: E0513 10:03:29.893526 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.893675 kubelet[2652]: E0513 10:03:29.893657 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893675 kubelet[2652]: W0513 10:03:29.893668 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893675 kubelet[2652]: E0513 10:03:29.893676 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.893871 kubelet[2652]: E0513 10:03:29.893799 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893871 kubelet[2652]: W0513 10:03:29.893806 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893871 kubelet[2652]: E0513 10:03:29.893813 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.893939 kubelet[2652]: E0513 10:03:29.893930 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.893939 kubelet[2652]: W0513 10:03:29.893938 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.893980 kubelet[2652]: E0513 10:03:29.893945 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.894077 kubelet[2652]: E0513 10:03:29.894057 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.894077 kubelet[2652]: W0513 10:03:29.894068 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.894077 kubelet[2652]: E0513 10:03:29.894075 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.894214 kubelet[2652]: E0513 10:03:29.894197 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.894214 kubelet[2652]: W0513 10:03:29.894207 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.894214 kubelet[2652]: E0513 10:03:29.894216 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.894339 kubelet[2652]: E0513 10:03:29.894333 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.894361 kubelet[2652]: W0513 10:03:29.894340 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.894361 kubelet[2652]: E0513 10:03:29.894347 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.894818 kubelet[2652]: E0513 10:03:29.894468 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.894818 kubelet[2652]: W0513 10:03:29.894477 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.894818 kubelet[2652]: E0513 10:03:29.894484 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:29.894818 kubelet[2652]: E0513 10:03:29.894601 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.894818 kubelet[2652]: W0513 10:03:29.894609 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.894818 kubelet[2652]: E0513 10:03:29.894617 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:29.894818 kubelet[2652]: E0513 10:03:29.894750 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:29.894818 kubelet[2652]: W0513 10:03:29.894757 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:29.894818 kubelet[2652]: E0513 10:03:29.894765 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:30.536760 kubelet[2652]: E0513 10:03:30.536378 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:31.322595 containerd[1534]: time="2025-05-13T10:03:31.322539720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:31.323067 containerd[1534]: time="2025-05-13T10:03:31.323035998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 10:03:31.324159 containerd[1534]: time="2025-05-13T10:03:31.324107072Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:31.326274 containerd[1534]: time="2025-05-13T10:03:31.326230461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:31.326712 containerd[1534]: time="2025-05-13T10:03:31.326686498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.702396238s" May 13 10:03:31.326764 containerd[1534]: time="2025-05-13T10:03:31.326718698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 10:03:31.327688 containerd[1534]: time="2025-05-13T10:03:31.327660693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 10:03:31.342598 containerd[1534]: time="2025-05-13T10:03:31.342561454Z" level=info msg="CreateContainer within sandbox \"7a0857dde4b95e8633fdf69b18c53d2ec79ff8c6fce07c449f127dd461089ac2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 10:03:31.350683 containerd[1534]: time="2025-05-13T10:03:31.350640212Z" level=info msg="Container aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:31.357997 containerd[1534]: time="2025-05-13T10:03:31.357953493Z" level=info msg="CreateContainer within sandbox \"7a0857dde4b95e8633fdf69b18c53d2ec79ff8c6fce07c449f127dd461089ac2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd\"" May 13 10:03:31.358446 containerd[1534]: time="2025-05-13T10:03:31.358413330Z" level=info msg="StartContainer for \"aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd\"" May 13 10:03:31.359473 containerd[1534]: time="2025-05-13T10:03:31.359447405Z" level=info msg="connecting to shim aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd" address="unix:///run/containerd/s/6d4f0870678cb2786042535c925a01a64fd745bdc9460588ed81c9adc6b67239" protocol=ttrpc version=3 May 13 10:03:31.379946 systemd[1]: Started cri-containerd-aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd.scope - libcontainer container aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd. 
May 13 10:03:31.420097 containerd[1534]: time="2025-05-13T10:03:31.419971604Z" level=info msg="StartContainer for \"aa7f14a4e07e2e6840c32b8c49bdb69ba23afcffa6a849ed36a8472f17b8d9bd\" returns successfully" May 13 10:03:31.618099 kubelet[2652]: I0513 10:03:31.617628 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-668769db98-2nh7f" podStartSLOduration=0.914176206 podStartE2EDuration="2.617610998s" podCreationTimestamp="2025-05-13 10:03:29 +0000 UTC" firstStartedPulling="2025-05-13 10:03:29.624013102 +0000 UTC m=+13.181878332" lastFinishedPulling="2025-05-13 10:03:31.327447894 +0000 UTC m=+14.885313124" observedRunningTime="2025-05-13 10:03:31.616190085 +0000 UTC m=+15.174055315" watchObservedRunningTime="2025-05-13 10:03:31.617610998 +0000 UTC m=+15.175476228" May 13 10:03:31.705913 kubelet[2652]: E0513 10:03:31.705878 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.705913 kubelet[2652]: W0513 10:03:31.705902 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706060 kubelet[2652]: E0513 10:03:31.705925 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.706102 kubelet[2652]: E0513 10:03:31.706085 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.706140 kubelet[2652]: W0513 10:03:31.706096 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706140 kubelet[2652]: E0513 10:03:31.706138 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.706285 kubelet[2652]: E0513 10:03:31.706273 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.706285 kubelet[2652]: W0513 10:03:31.706283 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706346 kubelet[2652]: E0513 10:03:31.706291 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.706443 kubelet[2652]: E0513 10:03:31.706430 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.706443 kubelet[2652]: W0513 10:03:31.706440 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706506 kubelet[2652]: E0513 10:03:31.706449 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.706637 kubelet[2652]: E0513 10:03:31.706621 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.706637 kubelet[2652]: W0513 10:03:31.706632 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706687 kubelet[2652]: E0513 10:03:31.706640 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.706793 kubelet[2652]: E0513 10:03:31.706773 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.706827 kubelet[2652]: W0513 10:03:31.706810 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706827 kubelet[2652]: E0513 10:03:31.706819 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.706950 kubelet[2652]: E0513 10:03:31.706939 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.706982 kubelet[2652]: W0513 10:03:31.706950 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.706982 kubelet[2652]: E0513 10:03:31.706957 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.707085 kubelet[2652]: E0513 10:03:31.707072 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707085 kubelet[2652]: W0513 10:03:31.707082 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707145 kubelet[2652]: E0513 10:03:31.707090 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.707224 kubelet[2652]: E0513 10:03:31.707213 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707224 kubelet[2652]: W0513 10:03:31.707223 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707285 kubelet[2652]: E0513 10:03:31.707230 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.707367 kubelet[2652]: E0513 10:03:31.707348 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707367 kubelet[2652]: W0513 10:03:31.707364 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707429 kubelet[2652]: E0513 10:03:31.707373 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.707496 kubelet[2652]: E0513 10:03:31.707485 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707496 kubelet[2652]: W0513 10:03:31.707494 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707551 kubelet[2652]: E0513 10:03:31.707502 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.707621 kubelet[2652]: E0513 10:03:31.707610 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707650 kubelet[2652]: W0513 10:03:31.707622 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707650 kubelet[2652]: E0513 10:03:31.707629 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.707764 kubelet[2652]: E0513 10:03:31.707752 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707764 kubelet[2652]: W0513 10:03:31.707762 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707836 kubelet[2652]: E0513 10:03:31.707769 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.707920 kubelet[2652]: E0513 10:03:31.707907 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.707920 kubelet[2652]: W0513 10:03:31.707917 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.707984 kubelet[2652]: E0513 10:03:31.707925 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.708052 kubelet[2652]: E0513 10:03:31.708041 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.708052 kubelet[2652]: W0513 10:03:31.708050 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.708106 kubelet[2652]: E0513 10:03:31.708058 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.718492 kubelet[2652]: E0513 10:03:31.718467 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.718492 kubelet[2652]: W0513 10:03:31.718486 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.718492 kubelet[2652]: E0513 10:03:31.718500 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.718764 kubelet[2652]: E0513 10:03:31.718745 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.718764 kubelet[2652]: W0513 10:03:31.718758 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.718890 kubelet[2652]: E0513 10:03:31.718771 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.718956 kubelet[2652]: E0513 10:03:31.718936 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.718956 kubelet[2652]: W0513 10:03:31.718953 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.719009 kubelet[2652]: E0513 10:03:31.718971 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.719141 kubelet[2652]: E0513 10:03:31.719131 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.719141 kubelet[2652]: W0513 10:03:31.719140 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.719201 kubelet[2652]: E0513 10:03:31.719153 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.719298 kubelet[2652]: E0513 10:03:31.719288 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.719298 kubelet[2652]: W0513 10:03:31.719297 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.719345 kubelet[2652]: E0513 10:03:31.719305 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.719458 kubelet[2652]: E0513 10:03:31.719448 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.719458 kubelet[2652]: W0513 10:03:31.719457 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.719522 kubelet[2652]: E0513 10:03:31.719470 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.719702 kubelet[2652]: E0513 10:03:31.719689 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.719734 kubelet[2652]: W0513 10:03:31.719702 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.719734 kubelet[2652]: E0513 10:03:31.719721 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.719928 kubelet[2652]: E0513 10:03:31.719908 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.719928 kubelet[2652]: W0513 10:03:31.719921 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.719997 kubelet[2652]: E0513 10:03:31.719937 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.720099 kubelet[2652]: E0513 10:03:31.720088 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.720099 kubelet[2652]: W0513 10:03:31.720098 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.720155 kubelet[2652]: E0513 10:03:31.720110 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.720357 kubelet[2652]: E0513 10:03:31.720235 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.720357 kubelet[2652]: W0513 10:03:31.720243 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.720357 kubelet[2652]: E0513 10:03:31.720253 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.720537 kubelet[2652]: E0513 10:03:31.720520 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.720595 kubelet[2652]: W0513 10:03:31.720584 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.720658 kubelet[2652]: E0513 10:03:31.720646 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.720831 kubelet[2652]: E0513 10:03:31.720816 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.720831 kubelet[2652]: W0513 10:03:31.720827 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.720917 kubelet[2652]: E0513 10:03:31.720843 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.721024 kubelet[2652]: E0513 10:03:31.721012 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.721058 kubelet[2652]: W0513 10:03:31.721025 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.721058 kubelet[2652]: E0513 10:03:31.721038 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.721454 kubelet[2652]: E0513 10:03:31.721322 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.721454 kubelet[2652]: W0513 10:03:31.721336 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.721454 kubelet[2652]: E0513 10:03:31.721360 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.721669 kubelet[2652]: E0513 10:03:31.721655 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.721722 kubelet[2652]: W0513 10:03:31.721710 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.721798 kubelet[2652]: E0513 10:03:31.721774 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.721964 kubelet[2652]: E0513 10:03:31.721942 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.721964 kubelet[2652]: W0513 10:03:31.721960 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.722021 kubelet[2652]: E0513 10:03:31.721979 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:31.722184 kubelet[2652]: E0513 10:03:31.722171 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.722184 kubelet[2652]: W0513 10:03:31.722184 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.722233 kubelet[2652]: E0513 10:03:31.722194 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:31.722502 kubelet[2652]: E0513 10:03:31.722487 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:31.722502 kubelet[2652]: W0513 10:03:31.722501 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:31.722553 kubelet[2652]: E0513 10:03:31.722510 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.537038 kubelet[2652]: E0513 10:03:32.536981 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:32.608169 kubelet[2652]: I0513 10:03:32.608137 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:03:32.613514 kubelet[2652]: E0513 10:03:32.613490 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.613618 kubelet[2652]: W0513 10:03:32.613510 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.613618 kubelet[2652]: E0513 10:03:32.613560 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.613855 kubelet[2652]: E0513 10:03:32.613767 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.613855 kubelet[2652]: W0513 10:03:32.613812 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.613855 kubelet[2652]: E0513 10:03:32.613822 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.614019 kubelet[2652]: E0513 10:03:32.614005 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.614019 kubelet[2652]: W0513 10:03:32.614017 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.614079 kubelet[2652]: E0513 10:03:32.614025 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.614256 kubelet[2652]: E0513 10:03:32.614230 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.614256 kubelet[2652]: W0513 10:03:32.614241 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.614256 kubelet[2652]: E0513 10:03:32.614250 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.614418 kubelet[2652]: E0513 10:03:32.614404 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.614418 kubelet[2652]: W0513 10:03:32.614415 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.614995 kubelet[2652]: E0513 10:03:32.614423 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.614995 kubelet[2652]: E0513 10:03:32.614624 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.614995 kubelet[2652]: W0513 10:03:32.614632 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.614995 kubelet[2652]: E0513 10:03:32.614640 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.614995 kubelet[2652]: E0513 10:03:32.614855 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.614995 kubelet[2652]: W0513 10:03:32.614905 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.614995 kubelet[2652]: E0513 10:03:32.614918 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.615292 kubelet[2652]: E0513 10:03:32.615272 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.615292 kubelet[2652]: W0513 10:03:32.615283 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.615292 kubelet[2652]: E0513 10:03:32.615293 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.615482 kubelet[2652]: E0513 10:03:32.615468 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.615516 kubelet[2652]: W0513 10:03:32.615487 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.615546 kubelet[2652]: E0513 10:03:32.615517 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.615714 kubelet[2652]: E0513 10:03:32.615702 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.615714 kubelet[2652]: W0513 10:03:32.615713 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.615774 kubelet[2652]: E0513 10:03:32.615722 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.615991 kubelet[2652]: E0513 10:03:32.615955 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.615991 kubelet[2652]: W0513 10:03:32.615967 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.615991 kubelet[2652]: E0513 10:03:32.615976 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.616194 kubelet[2652]: E0513 10:03:32.616178 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.616194 kubelet[2652]: W0513 10:03:32.616189 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.616250 kubelet[2652]: E0513 10:03:32.616198 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.616743 kubelet[2652]: E0513 10:03:32.616396 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.616743 kubelet[2652]: W0513 10:03:32.616408 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.616743 kubelet[2652]: E0513 10:03:32.616417 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.616743 kubelet[2652]: E0513 10:03:32.616601 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.616743 kubelet[2652]: W0513 10:03:32.616609 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.616743 kubelet[2652]: E0513 10:03:32.616618 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.617050 kubelet[2652]: E0513 10:03:32.616799 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.617050 kubelet[2652]: W0513 10:03:32.616808 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.617050 kubelet[2652]: E0513 10:03:32.616816 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.627209 kubelet[2652]: E0513 10:03:32.627183 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.627209 kubelet[2652]: W0513 10:03:32.627206 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.627681 kubelet[2652]: E0513 10:03:32.627220 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.627681 kubelet[2652]: E0513 10:03:32.627414 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.627681 kubelet[2652]: W0513 10:03:32.627422 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.627681 kubelet[2652]: E0513 10:03:32.627436 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.627862 kubelet[2652]: E0513 10:03:32.627831 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.627884 kubelet[2652]: W0513 10:03:32.627863 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.627884 kubelet[2652]: E0513 10:03:32.627911 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.628157 kubelet[2652]: E0513 10:03:32.628140 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.628157 kubelet[2652]: W0513 10:03:32.628153 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.628238 kubelet[2652]: E0513 10:03:32.628172 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.628334 kubelet[2652]: E0513 10:03:32.628322 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.628334 kubelet[2652]: W0513 10:03:32.628333 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.628489 kubelet[2652]: E0513 10:03:32.628356 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.628513 kubelet[2652]: E0513 10:03:32.628497 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.628513 kubelet[2652]: W0513 10:03:32.628505 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.628558 kubelet[2652]: E0513 10:03:32.628517 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.629121 kubelet[2652]: E0513 10:03:32.628718 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.629121 kubelet[2652]: W0513 10:03:32.628732 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.629121 kubelet[2652]: E0513 10:03:32.628750 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.629121 kubelet[2652]: E0513 10:03:32.628930 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.629121 kubelet[2652]: W0513 10:03:32.628942 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.629121 kubelet[2652]: E0513 10:03:32.628956 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.629478 kubelet[2652]: E0513 10:03:32.629457 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.629478 kubelet[2652]: W0513 10:03:32.629474 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.629533 kubelet[2652]: E0513 10:03:32.629489 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.629727 kubelet[2652]: E0513 10:03:32.629712 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.629896 kubelet[2652]: W0513 10:03:32.629725 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.629896 kubelet[2652]: E0513 10:03:32.629813 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.630081 kubelet[2652]: E0513 10:03:32.629952 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.630081 kubelet[2652]: W0513 10:03:32.629959 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.630081 kubelet[2652]: E0513 10:03:32.629997 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.630638 kubelet[2652]: E0513 10:03:32.630621 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.630638 kubelet[2652]: W0513 10:03:32.630636 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.630745 kubelet[2652]: E0513 10:03:32.630659 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.630896 kubelet[2652]: E0513 10:03:32.630881 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.630896 kubelet[2652]: W0513 10:03:32.630894 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.630942 kubelet[2652]: E0513 10:03:32.630928 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.631266 kubelet[2652]: E0513 10:03:32.631252 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.631266 kubelet[2652]: W0513 10:03:32.631265 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.631338 kubelet[2652]: E0513 10:03:32.631280 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.631499 kubelet[2652]: E0513 10:03:32.631466 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.631499 kubelet[2652]: W0513 10:03:32.631498 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.631543 kubelet[2652]: E0513 10:03:32.631511 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.631732 kubelet[2652]: E0513 10:03:32.631719 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.631732 kubelet[2652]: W0513 10:03:32.631731 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.631794 kubelet[2652]: E0513 10:03:32.631744 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:32.632028 kubelet[2652]: E0513 10:03:32.632010 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.632028 kubelet[2652]: W0513 10:03:32.632026 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.632091 kubelet[2652]: E0513 10:03:32.632044 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 10:03:32.632217 kubelet[2652]: E0513 10:03:32.632199 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 10:03:32.632217 kubelet[2652]: W0513 10:03:32.632211 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 10:03:32.632217 kubelet[2652]: E0513 10:03:32.632219 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 10:03:33.152116 containerd[1534]: time="2025-05-13T10:03:33.152056661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:33.153803 containerd[1534]: time="2025-05-13T10:03:33.153617054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 10:03:33.155298 containerd[1534]: time="2025-05-13T10:03:33.155268406Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:33.158242 containerd[1534]: time="2025-05-13T10:03:33.158206952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:33.158928 containerd[1534]: time="2025-05-13T10:03:33.158730550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.831035697s" May 13 10:03:33.158928 containerd[1534]: time="2025-05-13T10:03:33.158839430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 10:03:33.162007 containerd[1534]: time="2025-05-13T10:03:33.161959895Z" level=info msg="CreateContainer within sandbox \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 10:03:33.179586 containerd[1534]: time="2025-05-13T10:03:33.178557578Z" level=info msg="Container 6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:33.188419 containerd[1534]: time="2025-05-13T10:03:33.188366492Z" level=info msg="CreateContainer within sandbox \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\"" May 13 10:03:33.190232 containerd[1534]: time="2025-05-13T10:03:33.190204284Z" level=info msg="StartContainer for \"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\"" May 13 10:03:33.194268 containerd[1534]: time="2025-05-13T10:03:33.194231585Z" level=info msg="connecting to shim 6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684" address="unix:///run/containerd/s/abd424d060709e298a666fb836bc673a35e6b60d5257a881a7fe0ded89f4acae" protocol=ttrpc version=3 May 13 10:03:33.222942 systemd[1]: Started cri-containerd-6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684.scope - libcontainer container 
6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684. May 13 10:03:33.258403 containerd[1534]: time="2025-05-13T10:03:33.258364646Z" level=info msg="StartContainer for \"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\" returns successfully" May 13 10:03:33.288135 systemd[1]: cri-containerd-6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684.scope: Deactivated successfully. May 13 10:03:33.288428 systemd[1]: cri-containerd-6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684.scope: Consumed 43ms CPU time, 8M memory peak, 6.2M written to disk. May 13 10:03:33.316863 containerd[1534]: time="2025-05-13T10:03:33.316806614Z" level=info msg="received exit event container_id:\"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\" id:\"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\" pid:3349 exited_at:{seconds:1747130613 nanos:306657062}" May 13 10:03:33.319155 containerd[1534]: time="2025-05-13T10:03:33.319121844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\" id:\"6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684\" pid:3349 exited_at:{seconds:1747130613 nanos:306657062}" May 13 10:03:33.351161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d648e0027b3d7a0a7e97591e93d859f816f7f23c00c35e7c0b103fc96d02684-rootfs.mount: Deactivated successfully. 
May 13 10:03:33.613199 containerd[1534]: time="2025-05-13T10:03:33.613153755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 10:03:34.537457 kubelet[2652]: E0513 10:03:34.537068 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:36.536422 kubelet[2652]: E0513 10:03:36.536369 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:38.537518 kubelet[2652]: E0513 10:03:38.537357 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:38.704391 containerd[1534]: time="2025-05-13T10:03:38.704340857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:38.705117 containerd[1534]: time="2025-05-13T10:03:38.705088655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 10:03:38.705519 containerd[1534]: time="2025-05-13T10:03:38.705487173Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:38.707473 containerd[1534]: 
time="2025-05-13T10:03:38.707445887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:38.708319 containerd[1534]: time="2025-05-13T10:03:38.707984765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 5.09477269s" May 13 10:03:38.708319 containerd[1534]: time="2025-05-13T10:03:38.708013405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 10:03:38.710690 containerd[1534]: time="2025-05-13T10:03:38.710653436Z" level=info msg="CreateContainer within sandbox \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 10:03:38.717496 containerd[1534]: time="2025-05-13T10:03:38.717320413Z" level=info msg="Container d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:38.728432 containerd[1534]: time="2025-05-13T10:03:38.728389496Z" level=info msg="CreateContainer within sandbox \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\"" May 13 10:03:38.728865 containerd[1534]: time="2025-05-13T10:03:38.728806415Z" level=info msg="StartContainer for \"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\"" May 13 10:03:38.730136 containerd[1534]: time="2025-05-13T10:03:38.730100690Z" 
level=info msg="connecting to shim d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8" address="unix:///run/containerd/s/abd424d060709e298a666fb836bc673a35e6b60d5257a881a7fe0ded89f4acae" protocol=ttrpc version=3 May 13 10:03:38.756934 systemd[1]: Started cri-containerd-d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8.scope - libcontainer container d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8. May 13 10:03:38.796142 containerd[1534]: time="2025-05-13T10:03:38.794467513Z" level=info msg="StartContainer for \"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\" returns successfully" May 13 10:03:39.335448 systemd[1]: cri-containerd-d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8.scope: Deactivated successfully. May 13 10:03:39.335938 systemd[1]: cri-containerd-d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8.scope: Consumed 441ms CPU time, 160.5M memory peak, 4K read from disk, 150.3M written to disk. May 13 10:03:39.354391 containerd[1534]: time="2025-05-13T10:03:39.354348381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\" id:\"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\" pid:3410 exited_at:{seconds:1747130619 nanos:353998462}" May 13 10:03:39.360663 containerd[1534]: time="2025-05-13T10:03:39.360617521Z" level=info msg="received exit event container_id:\"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\" id:\"d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8\" pid:3410 exited_at:{seconds:1747130619 nanos:353998462}" May 13 10:03:39.377453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7db54baea45db7a59e1bdef61c76aff4e788c8af06af77c3c6fb30bcabfc9f8-rootfs.mount: Deactivated successfully. 
May 13 10:03:39.385589 kubelet[2652]: I0513 10:03:39.385161 2652 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 10:03:39.474435 systemd[1]: Created slice kubepods-besteffort-pod17cf1fb2_5969_495b_82ec_d1c2794c0ec1.slice - libcontainer container kubepods-besteffort-pod17cf1fb2_5969_495b_82ec_d1c2794c0ec1.slice. May 13 10:03:39.480823 systemd[1]: Created slice kubepods-besteffort-pod8094e0ea_b84d_445b_ae4b_c0c7f182d513.slice - libcontainer container kubepods-besteffort-pod8094e0ea_b84d_445b_ae4b_c0c7f182d513.slice. May 13 10:03:39.486377 systemd[1]: Created slice kubepods-burstable-podf3dfd6a8_9fad_4cbc_ac4f_5d413bd2328e.slice - libcontainer container kubepods-burstable-podf3dfd6a8_9fad_4cbc_ac4f_5d413bd2328e.slice. May 13 10:03:39.492589 kubelet[2652]: I0513 10:03:39.492155 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e-config-volume\") pod \"coredns-668d6bf9bc-9q7k6\" (UID: \"f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e\") " pod="kube-system/coredns-668d6bf9bc-9q7k6" May 13 10:03:39.492589 kubelet[2652]: I0513 10:03:39.492198 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrnc\" (UniqueName: \"kubernetes.io/projected/f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e-kube-api-access-tgrnc\") pod \"coredns-668d6bf9bc-9q7k6\" (UID: \"f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e\") " pod="kube-system/coredns-668d6bf9bc-9q7k6" May 13 10:03:39.492589 kubelet[2652]: I0513 10:03:39.492226 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttbr\" (UniqueName: \"kubernetes.io/projected/c2cd4944-68ef-4d1e-a1a0-b8c6846f8454-kube-api-access-2ttbr\") pod \"calico-kube-controllers-6bf6f57df4-rvbzz\" (UID: \"c2cd4944-68ef-4d1e-a1a0-b8c6846f8454\") " 
pod="calico-system/calico-kube-controllers-6bf6f57df4-rvbzz" May 13 10:03:39.492589 kubelet[2652]: I0513 10:03:39.492259 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859e52e9-cc24-4ee2-9387-8ea34429e6d6-config-volume\") pod \"coredns-668d6bf9bc-dq7ks\" (UID: \"859e52e9-cc24-4ee2-9387-8ea34429e6d6\") " pod="kube-system/coredns-668d6bf9bc-dq7ks" May 13 10:03:39.492589 kubelet[2652]: I0513 10:03:39.492293 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2cd4944-68ef-4d1e-a1a0-b8c6846f8454-tigera-ca-bundle\") pod \"calico-kube-controllers-6bf6f57df4-rvbzz\" (UID: \"c2cd4944-68ef-4d1e-a1a0-b8c6846f8454\") " pod="calico-system/calico-kube-controllers-6bf6f57df4-rvbzz" May 13 10:03:39.492383 systemd[1]: Created slice kubepods-burstable-pod859e52e9_cc24_4ee2_9387_8ea34429e6d6.slice - libcontainer container kubepods-burstable-pod859e52e9_cc24_4ee2_9387_8ea34429e6d6.slice. 
May 13 10:03:39.492896 kubelet[2652]: I0513 10:03:39.492318 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfss4\" (UniqueName: \"kubernetes.io/projected/8094e0ea-b84d-445b-ae4b-c0c7f182d513-kube-api-access-dfss4\") pod \"calico-apiserver-5fff58d5bc-rgbbb\" (UID: \"8094e0ea-b84d-445b-ae4b-c0c7f182d513\") " pod="calico-apiserver/calico-apiserver-5fff58d5bc-rgbbb" May 13 10:03:39.492896 kubelet[2652]: I0513 10:03:39.492342 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17cf1fb2-5969-495b-82ec-d1c2794c0ec1-calico-apiserver-certs\") pod \"calico-apiserver-5fff58d5bc-tnz59\" (UID: \"17cf1fb2-5969-495b-82ec-d1c2794c0ec1\") " pod="calico-apiserver/calico-apiserver-5fff58d5bc-tnz59" May 13 10:03:39.492896 kubelet[2652]: I0513 10:03:39.492361 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84dv\" (UniqueName: \"kubernetes.io/projected/17cf1fb2-5969-495b-82ec-d1c2794c0ec1-kube-api-access-s84dv\") pod \"calico-apiserver-5fff58d5bc-tnz59\" (UID: \"17cf1fb2-5969-495b-82ec-d1c2794c0ec1\") " pod="calico-apiserver/calico-apiserver-5fff58d5bc-tnz59" May 13 10:03:39.492896 kubelet[2652]: I0513 10:03:39.492849 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8094e0ea-b84d-445b-ae4b-c0c7f182d513-calico-apiserver-certs\") pod \"calico-apiserver-5fff58d5bc-rgbbb\" (UID: \"8094e0ea-b84d-445b-ae4b-c0c7f182d513\") " pod="calico-apiserver/calico-apiserver-5fff58d5bc-rgbbb" May 13 10:03:39.492990 kubelet[2652]: I0513 10:03:39.492912 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8rx\" (UniqueName: 
\"kubernetes.io/projected/859e52e9-cc24-4ee2-9387-8ea34429e6d6-kube-api-access-ms8rx\") pod \"coredns-668d6bf9bc-dq7ks\" (UID: \"859e52e9-cc24-4ee2-9387-8ea34429e6d6\") " pod="kube-system/coredns-668d6bf9bc-dq7ks" May 13 10:03:39.506646 systemd[1]: Created slice kubepods-besteffort-podc2cd4944_68ef_4d1e_a1a0_b8c6846f8454.slice - libcontainer container kubepods-besteffort-podc2cd4944_68ef_4d1e_a1a0_b8c6846f8454.slice. May 13 10:03:39.634571 containerd[1534]: time="2025-05-13T10:03:39.634428056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 10:03:39.780074 containerd[1534]: time="2025-05-13T10:03:39.780031996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-tnz59,Uid:17cf1fb2-5969-495b-82ec-d1c2794c0ec1,Namespace:calico-apiserver,Attempt:0,}" May 13 10:03:39.785060 containerd[1534]: time="2025-05-13T10:03:39.784845181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-rgbbb,Uid:8094e0ea-b84d-445b-ae4b-c0c7f182d513,Namespace:calico-apiserver,Attempt:0,}" May 13 10:03:39.792349 containerd[1534]: time="2025-05-13T10:03:39.792316877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q7k6,Uid:f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e,Namespace:kube-system,Attempt:0,}" May 13 10:03:39.813664 containerd[1534]: time="2025-05-13T10:03:39.812901812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf6f57df4-rvbzz,Uid:c2cd4944-68ef-4d1e-a1a0-b8c6846f8454,Namespace:calico-system,Attempt:0,}" May 13 10:03:39.813664 containerd[1534]: time="2025-05-13T10:03:39.813320531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7ks,Uid:859e52e9-cc24-4ee2-9387-8ea34429e6d6,Namespace:kube-system,Attempt:0,}" May 13 10:03:40.204243 containerd[1534]: time="2025-05-13T10:03:40.203827896Z" level=error msg="Failed to destroy network for sandbox 
\"1bc72d962cdd3eeae25b441bbebd62fac4e90380ebccd1db31e9d1b0b35f1192\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.211637 containerd[1534]: time="2025-05-13T10:03:40.211593673Z" level=error msg="Failed to destroy network for sandbox \"1e1b482abb76bafbafbe971946b6d08bb36188c7bf741b26365ec43d710caffa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.211916 containerd[1534]: time="2025-05-13T10:03:40.211878152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7ks,Uid:859e52e9-cc24-4ee2-9387-8ea34429e6d6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc72d962cdd3eeae25b441bbebd62fac4e90380ebccd1db31e9d1b0b35f1192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.213053 containerd[1534]: time="2025-05-13T10:03:40.213014549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-tnz59,Uid:17cf1fb2-5969-495b-82ec-d1c2794c0ec1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1b482abb76bafbafbe971946b6d08bb36188c7bf741b26365ec43d710caffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.215242 containerd[1534]: time="2025-05-13T10:03:40.215203303Z" level=error msg="Failed to destroy network for sandbox 
\"e0b9eb8232668ebc7bb0c2e4a6e9a057fda73e7d9b47ffba67f1ee9d1d8a9297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.216788 kubelet[2652]: E0513 10:03:40.216731 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1b482abb76bafbafbe971946b6d08bb36188c7bf741b26365ec43d710caffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.217094 kubelet[2652]: E0513 10:03:40.216696 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc72d962cdd3eeae25b441bbebd62fac4e90380ebccd1db31e9d1b0b35f1192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.217728 containerd[1534]: time="2025-05-13T10:03:40.217666295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-rgbbb,Uid:8094e0ea-b84d-445b-ae4b-c0c7f182d513,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b9eb8232668ebc7bb0c2e4a6e9a057fda73e7d9b47ffba67f1ee9d1d8a9297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.217934 kubelet[2652]: E0513 10:03:40.217900 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b9eb8232668ebc7bb0c2e4a6e9a057fda73e7d9b47ffba67f1ee9d1d8a9297\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.218930 containerd[1534]: time="2025-05-13T10:03:40.218150854Z" level=error msg="Failed to destroy network for sandbox \"fe68445861d0f7420680113e4b4d6a429f93e30b3c29acad4beb962b5f6c2129\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.219236 kubelet[2652]: E0513 10:03:40.219205 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b9eb8232668ebc7bb0c2e4a6e9a057fda73e7d9b47ffba67f1ee9d1d8a9297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff58d5bc-rgbbb" May 13 10:03:40.219340 kubelet[2652]: E0513 10:03:40.219323 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b9eb8232668ebc7bb0c2e4a6e9a057fda73e7d9b47ffba67f1ee9d1d8a9297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff58d5bc-rgbbb" May 13 10:03:40.219800 kubelet[2652]: E0513 10:03:40.219694 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1b482abb76bafbafbe971946b6d08bb36188c7bf741b26365ec43d710caffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff58d5bc-tnz59" May 13 10:03:40.219800 kubelet[2652]: E0513 10:03:40.219734 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1b482abb76bafbafbe971946b6d08bb36188c7bf741b26365ec43d710caffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff58d5bc-tnz59" May 13 10:03:40.219800 kubelet[2652]: E0513 10:03:40.219745 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fff58d5bc-rgbbb_calico-apiserver(8094e0ea-b84d-445b-ae4b-c0c7f182d513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fff58d5bc-rgbbb_calico-apiserver(8094e0ea-b84d-445b-ae4b-c0c7f182d513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0b9eb8232668ebc7bb0c2e4a6e9a057fda73e7d9b47ffba67f1ee9d1d8a9297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff58d5bc-rgbbb" podUID="8094e0ea-b84d-445b-ae4b-c0c7f182d513" May 13 10:03:40.219937 kubelet[2652]: E0513 10:03:40.219822 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fff58d5bc-tnz59_calico-apiserver(17cf1fb2-5969-495b-82ec-d1c2794c0ec1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fff58d5bc-tnz59_calico-apiserver(17cf1fb2-5969-495b-82ec-d1c2794c0ec1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e1b482abb76bafbafbe971946b6d08bb36188c7bf741b26365ec43d710caffa\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff58d5bc-tnz59" podUID="17cf1fb2-5969-495b-82ec-d1c2794c0ec1" May 13 10:03:40.221290 kubelet[2652]: E0513 10:03:40.221233 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc72d962cdd3eeae25b441bbebd62fac4e90380ebccd1db31e9d1b0b35f1192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq7ks" May 13 10:03:40.221290 kubelet[2652]: E0513 10:03:40.221271 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc72d962cdd3eeae25b441bbebd62fac4e90380ebccd1db31e9d1b0b35f1192\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq7ks" May 13 10:03:40.221446 containerd[1534]: time="2025-05-13T10:03:40.221412924Z" level=error msg="Failed to destroy network for sandbox \"44054fd257df4be0218d0d6a45afb335a355278b17576c0a5c1c5ecc46671f6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.221748 containerd[1534]: time="2025-05-13T10:03:40.221705443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf6f57df4-rvbzz,Uid:c2cd4944-68ef-4d1e-a1a0-b8c6846f8454,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"fe68445861d0f7420680113e4b4d6a429f93e30b3c29acad4beb962b5f6c2129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.221852 kubelet[2652]: E0513 10:03:40.221650 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dq7ks_kube-system(859e52e9-cc24-4ee2-9387-8ea34429e6d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dq7ks_kube-system(859e52e9-cc24-4ee2-9387-8ea34429e6d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bc72d962cdd3eeae25b441bbebd62fac4e90380ebccd1db31e9d1b0b35f1192\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dq7ks" podUID="859e52e9-cc24-4ee2-9387-8ea34429e6d6" May 13 10:03:40.222026 kubelet[2652]: E0513 10:03:40.221952 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe68445861d0f7420680113e4b4d6a429f93e30b3c29acad4beb962b5f6c2129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.222077 kubelet[2652]: E0513 10:03:40.222059 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe68445861d0f7420680113e4b4d6a429f93e30b3c29acad4beb962b5f6c2129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6bf6f57df4-rvbzz" May 13 10:03:40.222111 kubelet[2652]: E0513 10:03:40.222097 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe68445861d0f7420680113e4b4d6a429f93e30b3c29acad4beb962b5f6c2129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf6f57df4-rvbzz" May 13 10:03:40.222150 kubelet[2652]: E0513 10:03:40.222131 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bf6f57df4-rvbzz_calico-system(c2cd4944-68ef-4d1e-a1a0-b8c6846f8454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bf6f57df4-rvbzz_calico-system(c2cd4944-68ef-4d1e-a1a0-b8c6846f8454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe68445861d0f7420680113e4b4d6a429f93e30b3c29acad4beb962b5f6c2129\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bf6f57df4-rvbzz" podUID="c2cd4944-68ef-4d1e-a1a0-b8c6846f8454" May 13 10:03:40.222818 containerd[1534]: time="2025-05-13T10:03:40.222768160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q7k6,Uid:f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44054fd257df4be0218d0d6a45afb335a355278b17576c0a5c1c5ecc46671f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 13 10:03:40.223304 kubelet[2652]: E0513 10:03:40.223237 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44054fd257df4be0218d0d6a45afb335a355278b17576c0a5c1c5ecc46671f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.223355 kubelet[2652]: E0513 10:03:40.223304 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44054fd257df4be0218d0d6a45afb335a355278b17576c0a5c1c5ecc46671f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9q7k6" May 13 10:03:40.223355 kubelet[2652]: E0513 10:03:40.223320 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44054fd257df4be0218d0d6a45afb335a355278b17576c0a5c1c5ecc46671f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9q7k6" May 13 10:03:40.223407 kubelet[2652]: E0513 10:03:40.223347 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9q7k6_kube-system(f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9q7k6_kube-system(f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44054fd257df4be0218d0d6a45afb335a355278b17576c0a5c1c5ecc46671f6c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9q7k6" podUID="f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e" May 13 10:03:40.543620 systemd[1]: Created slice kubepods-besteffort-pod3d59e185_488f_4c52_86e4_340ff54919cf.slice - libcontainer container kubepods-besteffort-pod3d59e185_488f_4c52_86e4_340ff54919cf.slice. May 13 10:03:40.546321 containerd[1534]: time="2025-05-13T10:03:40.546289802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqt2t,Uid:3d59e185-488f-4c52-86e4-340ff54919cf,Namespace:calico-system,Attempt:0,}" May 13 10:03:40.600240 containerd[1534]: time="2025-05-13T10:03:40.600107362Z" level=error msg="Failed to destroy network for sandbox \"6f7de73cbf4474bc5d52887aba5267d6ceb4825149f20f8c0c0de0a9d443af0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.601180 containerd[1534]: time="2025-05-13T10:03:40.601057000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqt2t,Uid:3d59e185-488f-4c52-86e4-340ff54919cf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7de73cbf4474bc5d52887aba5267d6ceb4825149f20f8c0c0de0a9d443af0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.601335 kubelet[2652]: E0513 10:03:40.601296 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7de73cbf4474bc5d52887aba5267d6ceb4825149f20f8c0c0de0a9d443af0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 13 10:03:40.601386 kubelet[2652]: E0513 10:03:40.601354 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7de73cbf4474bc5d52887aba5267d6ceb4825149f20f8c0c0de0a9d443af0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:40.601386 kubelet[2652]: E0513 10:03:40.601374 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7de73cbf4474bc5d52887aba5267d6ceb4825149f20f8c0c0de0a9d443af0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hqt2t" May 13 10:03:40.601440 kubelet[2652]: E0513 10:03:40.601419 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hqt2t_calico-system(3d59e185-488f-4c52-86e4-340ff54919cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hqt2t_calico-system(3d59e185-488f-4c52-86e4-340ff54919cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f7de73cbf4474bc5d52887aba5267d6ceb4825149f20f8c0c0de0a9d443af0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hqt2t" podUID="3d59e185-488f-4c52-86e4-340ff54919cf" May 13 10:03:40.719418 systemd[1]: run-netns-cni\x2d3ddf69e3\x2dbda4\x2d2b0e\x2ddca0\x2d7dea1ae51e55.mount: Deactivated successfully. 
May 13 10:03:40.719511 systemd[1]: run-netns-cni\x2d7a93a4f1\x2df920\x2dcbd3\x2d908e\x2d257d381fb7ee.mount: Deactivated successfully. May 13 10:03:40.719556 systemd[1]: run-netns-cni\x2d88a4f906\x2dd754\x2d8ee2\x2d5c1b\x2d3b5c19408a25.mount: Deactivated successfully. May 13 10:03:40.719599 systemd[1]: run-netns-cni\x2d3008460c\x2dfef1\x2d9291\x2db41e\x2d8a3886a33ffc.mount: Deactivated successfully. May 13 10:03:43.794961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515959099.mount: Deactivated successfully. May 13 10:03:44.105230 containerd[1534]: time="2025-05-13T10:03:44.105098556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 10:03:44.109571 containerd[1534]: time="2025-05-13T10:03:44.109448186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.47492429s" May 13 10:03:44.109571 containerd[1534]: time="2025-05-13T10:03:44.109485746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 10:03:44.120197 containerd[1534]: time="2025-05-13T10:03:44.119690682Z" level=info msg="CreateContainer within sandbox \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 10:03:44.124979 containerd[1534]: time="2025-05-13T10:03:44.124914790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:44.127162 containerd[1534]: time="2025-05-13T10:03:44.127059545Z" level=info msg="ImageCreate event 
name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:44.130247 containerd[1534]: time="2025-05-13T10:03:44.130210258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:44.150260 containerd[1534]: time="2025-05-13T10:03:44.148427136Z" level=info msg="Container a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:44.152528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610355074.mount: Deactivated successfully. May 13 10:03:44.158569 containerd[1534]: time="2025-05-13T10:03:44.158514033Z" level=info msg="CreateContainer within sandbox \"70218cbead9dc3a445b96d3c266a0ac4257e9968ae41ced23c6cf7973200225e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\"" May 13 10:03:44.159036 containerd[1534]: time="2025-05-13T10:03:44.159011352Z" level=info msg="StartContainer for \"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\"" May 13 10:03:44.160537 containerd[1534]: time="2025-05-13T10:03:44.160483269Z" level=info msg="connecting to shim a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f" address="unix:///run/containerd/s/abd424d060709e298a666fb836bc673a35e6b60d5257a881a7fe0ded89f4acae" protocol=ttrpc version=3 May 13 10:03:44.178978 systemd[1]: Started cri-containerd-a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f.scope - libcontainer container a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f. 
May 13 10:03:44.221751 containerd[1534]: time="2025-05-13T10:03:44.221704409Z" level=info msg="StartContainer for \"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\" returns successfully" May 13 10:03:44.392402 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 10:03:44.392503 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 10:03:44.666967 kubelet[2652]: I0513 10:03:44.666903 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4jm2j" podStartSLOduration=1.1924149179999999 podStartE2EDuration="15.66688695s" podCreationTimestamp="2025-05-13 10:03:29 +0000 UTC" firstStartedPulling="2025-05-13 10:03:29.635660352 +0000 UTC m=+13.193525582" lastFinishedPulling="2025-05-13 10:03:44.110132384 +0000 UTC m=+27.667997614" observedRunningTime="2025-05-13 10:03:44.666179112 +0000 UTC m=+28.224044422" watchObservedRunningTime="2025-05-13 10:03:44.66688695 +0000 UTC m=+28.224752180" May 13 10:03:45.488087 containerd[1534]: time="2025-05-13T10:03:45.488028661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\" id:\"36b0ff0ad5c722a9f10a02e24131f370d72db67ae0a7688c4a2eb62785b1995d\" pid:3757 exit_status:1 exited_at:{seconds:1747130625 nanos:487705302}" May 13 10:03:45.549604 containerd[1534]: time="2025-05-13T10:03:45.549549049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\" id:\"6472354efbb123b015a8809713566f6d3959dfe1f2681d29bafe7e01e6bf00d8\" pid:3781 exit_status:1 exited_at:{seconds:1747130625 nanos:549295889}" May 13 10:03:45.762193 containerd[1534]: time="2025-05-13T10:03:45.762035633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\" 
id:\"38d674de0cd5fd9efeee7156e5df1bcfb091b0c1f1a87f2345f8a2f40196ae94\" pid:3898 exit_status:1 exited_at:{seconds:1747130625 nanos:761519554}" May 13 10:03:46.712942 containerd[1534]: time="2025-05-13T10:03:46.712905009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\" id:\"cecc60eecdd530be252d3b8ccb67eb9348f3dc53de754d519526314d0d0650c3\" pid:3935 exit_status:1 exited_at:{seconds:1747130626 nanos:712601889}" May 13 10:03:47.685105 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:47552.service - OpenSSH per-connection server daemon (10.0.0.1:47552). May 13 10:03:47.756732 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 47552 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:03:47.758391 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:47.762751 systemd-logind[1515]: New session 8 of user core. May 13 10:03:47.772935 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 10:03:47.919834 sshd[3974]: Connection closed by 10.0.0.1 port 47552 May 13 10:03:47.919751 sshd-session[3972]: pam_unix(sshd:session): session closed for user core May 13 10:03:47.922511 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:47552.service: Deactivated successfully. May 13 10:03:47.924744 systemd[1]: session-8.scope: Deactivated successfully. May 13 10:03:47.926261 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. May 13 10:03:47.927980 systemd-logind[1515]: Removed session 8. 
May 13 10:03:52.537609 containerd[1534]: time="2025-05-13T10:03:52.537516452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf6f57df4-rvbzz,Uid:c2cd4944-68ef-4d1e-a1a0-b8c6846f8454,Namespace:calico-system,Attempt:0,}" May 13 10:03:52.773573 systemd-networkd[1441]: calicef2d515b51: Link UP May 13 10:03:52.774150 systemd-networkd[1441]: calicef2d515b51: Gained carrier May 13 10:03:52.785447 containerd[1534]: 2025-05-13 10:03:52.574 [INFO][4106] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 10:03:52.785447 containerd[1534]: 2025-05-13 10:03:52.629 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0 calico-kube-controllers-6bf6f57df4- calico-system c2cd4944-68ef-4d1e-a1a0-b8c6846f8454 667 0 2025-05-13 10:03:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bf6f57df4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bf6f57df4-rvbzz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicef2d515b51 [] []}} ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-" May 13 10:03:52.785447 containerd[1534]: 2025-05-13 10:03:52.629 [INFO][4106] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.785447 
containerd[1534]: 2025-05-13 10:03:52.727 [INFO][4121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" HandleID="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Workload="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.740 [INFO][4121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" HandleID="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Workload="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bf6f57df4-rvbzz", "timestamp":"2025-05-13 10:03:52.727278716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.740 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.740 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.740 [INFO][4121] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.742 [INFO][4121] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" host="localhost" May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.747 [INFO][4121] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.750 [INFO][4121] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.752 [INFO][4121] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.754 [INFO][4121] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:52.785710 containerd[1534]: 2025-05-13 10:03:52.754 [INFO][4121] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" host="localhost" May 13 10:03:52.786004 containerd[1534]: 2025-05-13 10:03:52.755 [INFO][4121] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f May 13 10:03:52.786004 containerd[1534]: 2025-05-13 10:03:52.759 [INFO][4121] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" host="localhost" May 13 10:03:52.786004 containerd[1534]: 2025-05-13 10:03:52.764 [INFO][4121] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" host="localhost" May 13 10:03:52.786004 containerd[1534]: 2025-05-13 10:03:52.764 [INFO][4121] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" host="localhost" May 13 10:03:52.786004 containerd[1534]: 2025-05-13 10:03:52.764 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 10:03:52.786004 containerd[1534]: 2025-05-13 10:03:52.764 [INFO][4121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" HandleID="k8s-pod-network.40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Workload="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.786121 containerd[1534]: 2025-05-13 10:03:52.767 [INFO][4106] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0", GenerateName:"calico-kube-controllers-6bf6f57df4-", Namespace:"calico-system", SelfLink:"", UID:"c2cd4944-68ef-4d1e-a1a0-b8c6846f8454", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf6f57df4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bf6f57df4-rvbzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicef2d515b51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:52.788110 containerd[1534]: 2025-05-13 10:03:52.767 [INFO][4106] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.788110 containerd[1534]: 2025-05-13 10:03:52.767 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicef2d515b51 ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.788110 containerd[1534]: 2025-05-13 10:03:52.774 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.788192 containerd[1534]: 2025-05-13 10:03:52.774 [INFO][4106] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0", GenerateName:"calico-kube-controllers-6bf6f57df4-", Namespace:"calico-system", SelfLink:"", UID:"c2cd4944-68ef-4d1e-a1a0-b8c6846f8454", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf6f57df4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f", Pod:"calico-kube-controllers-6bf6f57df4-rvbzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicef2d515b51", MAC:"9e:84:70:6f:bf:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:52.788262 containerd[1534]: 2025-05-13 10:03:52.783 [INFO][4106] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" Namespace="calico-system" Pod="calico-kube-controllers-6bf6f57df4-rvbzz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf6f57df4--rvbzz-eth0" May 13 10:03:52.922982 containerd[1534]: time="2025-05-13T10:03:52.922860702Z" level=info msg="connecting to shim 40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f" address="unix:///run/containerd/s/a92710e94312841bd791f82b255a5a5019de381b73b0fd287d8811111d15d80a" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:52.935558 kubelet[2652]: I0513 10:03:52.935083 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:03:52.935393 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:45262.service - OpenSSH per-connection server daemon (10.0.0.1:45262). May 13 10:03:52.949036 systemd[1]: Started cri-containerd-40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f.scope - libcontainer container 40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f. May 13 10:03:52.975373 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:53.001586 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 45262 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:03:53.002629 containerd[1534]: time="2025-05-13T10:03:53.002586843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf6f57df4-rvbzz,Uid:c2cd4944-68ef-4d1e-a1a0-b8c6846f8454,Namespace:calico-system,Attempt:0,} returns sandbox id \"40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f\"" May 13 10:03:53.003176 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:53.008734 systemd-logind[1515]: New session 9 of user core. 
May 13 10:03:53.012527 containerd[1534]: time="2025-05-13T10:03:53.012470833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 10:03:53.015953 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 10:03:53.178557 sshd[4195]: Connection closed by 10.0.0.1 port 45262 May 13 10:03:53.178933 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 13 10:03:53.183215 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:45262.service: Deactivated successfully. May 13 10:03:53.186137 systemd[1]: session-9.scope: Deactivated successfully. May 13 10:03:53.187073 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. May 13 10:03:53.188704 systemd-logind[1515]: Removed session 9. May 13 10:03:53.536950 containerd[1534]: time="2025-05-13T10:03:53.536812391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-rgbbb,Uid:8094e0ea-b84d-445b-ae4b-c0c7f182d513,Namespace:calico-apiserver,Attempt:0,}" May 13 10:03:53.687207 systemd-networkd[1441]: calif295953573c: Link UP May 13 10:03:53.687899 systemd-networkd[1441]: calif295953573c: Gained carrier May 13 10:03:53.700038 containerd[1534]: 2025-05-13 10:03:53.600 [INFO][4253] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 10:03:53.700038 containerd[1534]: 2025-05-13 10:03:53.612 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0 calico-apiserver-5fff58d5bc- calico-apiserver 8094e0ea-b84d-445b-ae4b-c0c7f182d513 664 0 2025-05-13 10:03:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fff58d5bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fff58d5bc-rgbbb eth0 calico-apiserver [] 
[] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif295953573c [] []}} ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-" May 13 10:03:53.700038 containerd[1534]: 2025-05-13 10:03:53.612 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.700038 containerd[1534]: 2025-05-13 10:03:53.641 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" HandleID="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Workload="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.653 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" HandleID="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Workload="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fff58d5bc-rgbbb", "timestamp":"2025-05-13 10:03:53.641341453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.653 [INFO][4266] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.653 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.653 [INFO][4266] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.655 [INFO][4266] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" host="localhost" May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.659 [INFO][4266] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.663 [INFO][4266] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.665 [INFO][4266] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.667 [INFO][4266] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:53.700435 containerd[1534]: 2025-05-13 10:03:53.667 [INFO][4266] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" host="localhost" May 13 10:03:53.700628 containerd[1534]: 2025-05-13 10:03:53.668 [INFO][4266] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8 May 13 10:03:53.700628 containerd[1534]: 2025-05-13 10:03:53.674 [INFO][4266] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" host="localhost" May 13 10:03:53.700628 containerd[1534]: 2025-05-13 
10:03:53.682 [INFO][4266] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" host="localhost" May 13 10:03:53.700628 containerd[1534]: 2025-05-13 10:03:53.682 [INFO][4266] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" host="localhost" May 13 10:03:53.700628 containerd[1534]: 2025-05-13 10:03:53.682 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 10:03:53.700628 containerd[1534]: 2025-05-13 10:03:53.682 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" HandleID="k8s-pod-network.33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Workload="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.700734 containerd[1534]: 2025-05-13 10:03:53.684 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0", GenerateName:"calico-apiserver-5fff58d5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8094e0ea-b84d-445b-ae4b-c0c7f182d513", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5fff58d5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fff58d5bc-rgbbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif295953573c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:53.700801 containerd[1534]: 2025-05-13 10:03:53.685 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.700801 containerd[1534]: 2025-05-13 10:03:53.685 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif295953573c ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.700801 containerd[1534]: 2025-05-13 10:03:53.688 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.700907 containerd[1534]: 2025-05-13 
10:03:53.688 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0", GenerateName:"calico-apiserver-5fff58d5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"8094e0ea-b84d-445b-ae4b-c0c7f182d513", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff58d5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8", Pod:"calico-apiserver-5fff58d5bc-rgbbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif295953573c", MAC:"36:fd:66:b8:fd:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:53.701024 containerd[1534]: 2025-05-13 10:03:53.697 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-rgbbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--rgbbb-eth0" May 13 10:03:53.720925 containerd[1534]: time="2025-05-13T10:03:53.720881097Z" level=info msg="connecting to shim 33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8" address="unix:///run/containerd/s/99621d033ee92c4e5f7e7e90269d4fc7e9d814d9c98950fe8063d37dff22405c" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:53.745943 systemd[1]: Started cri-containerd-33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8.scope - libcontainer container 33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8. May 13 10:03:53.757830 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:53.789235 containerd[1534]: time="2025-05-13T10:03:53.789134141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-rgbbb,Uid:8094e0ea-b84d-445b-ae4b-c0c7f182d513,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8\"" May 13 10:03:54.040975 systemd-networkd[1441]: calicef2d515b51: Gained IPv6LL May 13 10:03:54.207806 systemd-networkd[1441]: vxlan.calico: Link UP May 13 10:03:54.207814 systemd-networkd[1441]: vxlan.calico: Gained carrier May 13 10:03:54.537448 containerd[1534]: time="2025-05-13T10:03:54.537229502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-tnz59,Uid:17cf1fb2-5969-495b-82ec-d1c2794c0ec1,Namespace:calico-apiserver,Attempt:0,}" May 13 10:03:54.661258 systemd-networkd[1441]: cali72f75bb178a: Link UP May 13 10:03:54.661604 systemd-networkd[1441]: cali72f75bb178a: Gained carrier May 13 10:03:54.678145 containerd[1534]: 2025-05-13 10:03:54.585 [INFO][4427] cni-plugin/plugin.go 340: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0 calico-apiserver-5fff58d5bc- calico-apiserver 17cf1fb2-5969-495b-82ec-d1c2794c0ec1 662 0 2025-05-13 10:03:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fff58d5bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fff58d5bc-tnz59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72f75bb178a [] []}} ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-" May 13 10:03:54.678145 containerd[1534]: 2025-05-13 10:03:54.585 [INFO][4427] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.678145 containerd[1534]: 2025-05-13 10:03:54.612 [INFO][4441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" HandleID="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Workload="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.623 [INFO][4441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" HandleID="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Workload="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fff58d5bc-tnz59", "timestamp":"2025-05-13 10:03:54.612741903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.623 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.623 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.623 [INFO][4441] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.625 [INFO][4441] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" host="localhost" May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.629 [INFO][4441] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.635 [INFO][4441] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.637 [INFO][4441] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.643 [INFO][4441] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:54.678552 containerd[1534]: 2025-05-13 10:03:54.643 [INFO][4441] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" host="localhost" May 13 10:03:54.679202 containerd[1534]: 2025-05-13 10:03:54.644 [INFO][4441] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb May 13 10:03:54.679202 containerd[1534]: 2025-05-13 10:03:54.649 [INFO][4441] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" host="localhost" May 13 10:03:54.679202 containerd[1534]: 2025-05-13 10:03:54.655 [INFO][4441] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" host="localhost" May 13 10:03:54.679202 containerd[1534]: 2025-05-13 10:03:54.656 [INFO][4441] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" host="localhost" May 13 10:03:54.679202 containerd[1534]: 2025-05-13 10:03:54.656 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 10:03:54.679202 containerd[1534]: 2025-05-13 10:03:54.656 [INFO][4441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" HandleID="k8s-pod-network.1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Workload="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.679328 containerd[1534]: 2025-05-13 10:03:54.659 [INFO][4427] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0", GenerateName:"calico-apiserver-5fff58d5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"17cf1fb2-5969-495b-82ec-d1c2794c0ec1", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff58d5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fff58d5bc-tnz59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f75bb178a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:54.679392 containerd[1534]: 2025-05-13 10:03:54.659 [INFO][4427] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.679392 containerd[1534]: 2025-05-13 10:03:54.659 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72f75bb178a ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.679392 containerd[1534]: 2025-05-13 10:03:54.662 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.679460 containerd[1534]: 2025-05-13 10:03:54.662 [INFO][4427] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0", GenerateName:"calico-apiserver-5fff58d5bc-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"17cf1fb2-5969-495b-82ec-d1c2794c0ec1", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff58d5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb", Pod:"calico-apiserver-5fff58d5bc-tnz59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f75bb178a", MAC:"9a:e5:a8:5d:5c:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:54.679503 containerd[1534]: 2025-05-13 10:03:54.673 [INFO][4427] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" Namespace="calico-apiserver" Pod="calico-apiserver-5fff58d5bc-tnz59" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fff58d5bc--tnz59-eth0" May 13 10:03:54.716625 containerd[1534]: time="2025-05-13T10:03:54.716427178Z" level=info msg="connecting to shim 1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb" address="unix:///run/containerd/s/8cde8e267e899b9ebe4bc5381a9050f53f3df732b5117b6c7c2754cb7abd1628" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:54.748000 systemd[1]: Started 
cri-containerd-1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb.scope - libcontainer container 1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb. May 13 10:03:54.763832 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:54.789021 containerd[1534]: time="2025-05-13T10:03:54.788917478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff58d5bc-tnz59,Uid:17cf1fb2-5969-495b-82ec-d1c2794c0ec1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb\"" May 13 10:03:55.003490 containerd[1534]: time="2025-05-13T10:03:55.003439637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:55.004193 containerd[1534]: time="2025-05-13T10:03:55.003977521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 10:03:55.004945 containerd[1534]: time="2025-05-13T10:03:55.004905047Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:55.007299 containerd[1534]: time="2025-05-13T10:03:55.007249823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:55.008111 containerd[1534]: time="2025-05-13T10:03:55.008083869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.995560716s" May 13 10:03:55.008202 containerd[1534]: time="2025-05-13T10:03:55.008187429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 10:03:55.010044 containerd[1534]: time="2025-05-13T10:03:55.010012122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 10:03:55.026824 containerd[1534]: time="2025-05-13T10:03:55.026471352Z" level=info msg="CreateContainer within sandbox \"40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 10:03:55.033394 containerd[1534]: time="2025-05-13T10:03:55.033366078Z" level=info msg="Container 509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:55.041584 containerd[1534]: time="2025-05-13T10:03:55.041329212Z" level=info msg="CreateContainer within sandbox \"40fe06a512f1b06f54255e7f23b1d0175a8f90c5449630ef77a51ea3f28b767f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5\"" May 13 10:03:55.045125 containerd[1534]: time="2025-05-13T10:03:55.045091237Z" level=info msg="StartContainer for \"509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5\"" May 13 10:03:55.046303 containerd[1534]: time="2025-05-13T10:03:55.046160324Z" level=info msg="connecting to shim 509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5" address="unix:///run/containerd/s/a92710e94312841bd791f82b255a5a5019de381b73b0fd287d8811111d15d80a" protocol=ttrpc version=3 May 13 10:03:55.070015 systemd[1]: Started 
cri-containerd-509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5.scope - libcontainer container 509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5. May 13 10:03:55.107193 containerd[1534]: time="2025-05-13T10:03:55.107155134Z" level=info msg="StartContainer for \"509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5\" returns successfully" May 13 10:03:55.193734 systemd-networkd[1441]: calif295953573c: Gained IPv6LL May 13 10:03:55.385125 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL May 13 10:03:55.537481 containerd[1534]: time="2025-05-13T10:03:55.537403261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqt2t,Uid:3d59e185-488f-4c52-86e4-340ff54919cf,Namespace:calico-system,Attempt:0,}" May 13 10:03:55.537941 containerd[1534]: time="2025-05-13T10:03:55.537902664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7ks,Uid:859e52e9-cc24-4ee2-9387-8ea34429e6d6,Namespace:kube-system,Attempt:0,}" May 13 10:03:55.538004 containerd[1534]: time="2025-05-13T10:03:55.537933904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q7k6,Uid:f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e,Namespace:kube-system,Attempt:0,}" May 13 10:03:55.696272 systemd-networkd[1441]: cali2f58823bf4e: Link UP May 13 10:03:55.696432 systemd-networkd[1441]: cali2f58823bf4e: Gained carrier May 13 10:03:55.705043 systemd-networkd[1441]: cali72f75bb178a: Gained IPv6LL May 13 10:03:55.706948 kubelet[2652]: I0513 10:03:55.706859 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bf6f57df4-rvbzz" podStartSLOduration=24.709778392 podStartE2EDuration="26.706762197s" podCreationTimestamp="2025-05-13 10:03:29 +0000 UTC" firstStartedPulling="2025-05-13 10:03:53.01208455 +0000 UTC m=+36.569949780" lastFinishedPulling="2025-05-13 10:03:55.009068315 +0000 UTC m=+38.566933585" observedRunningTime="2025-05-13 
10:03:55.702102326 +0000 UTC m=+39.259967516" watchObservedRunningTime="2025-05-13 10:03:55.706762197 +0000 UTC m=+39.264627427" May 13 10:03:55.712228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899241130.mount: Deactivated successfully. May 13 10:03:55.717858 containerd[1534]: 2025-05-13 10:03:55.602 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hqt2t-eth0 csi-node-driver- calico-system 3d59e185-488f-4c52-86e4-340ff54919cf 580 0 2025-05-13 10:03:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hqt2t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2f58823bf4e [] []}} ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-" May 13 10:03:55.717858 containerd[1534]: 2025-05-13 10:03:55.602 [INFO][4551] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.717858 containerd[1534]: 2025-05-13 10:03:55.650 [INFO][4596] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" HandleID="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Workload="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.666 [INFO][4596] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" HandleID="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Workload="localhost-k8s-csi--node--driver--hqt2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000467da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hqt2t", "timestamp":"2025-05-13 10:03:55.650755422 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.666 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.666 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.666 [INFO][4596] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.668 [INFO][4596] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" host="localhost" May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.672 [INFO][4596] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.676 [INFO][4596] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.678 [INFO][4596] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.680 [INFO][4596] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:55.718208 containerd[1534]: 2025-05-13 10:03:55.680 [INFO][4596] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" host="localhost" May 13 10:03:55.718424 containerd[1534]: 2025-05-13 10:03:55.682 [INFO][4596] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235 May 13 10:03:55.718424 containerd[1534]: 2025-05-13 10:03:55.686 [INFO][4596] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" host="localhost" May 13 10:03:55.718424 containerd[1534]: 2025-05-13 10:03:55.691 [INFO][4596] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" host="localhost" May 13 10:03:55.718424 containerd[1534]: 2025-05-13 10:03:55.691 [INFO][4596] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" host="localhost" May 13 10:03:55.718424 containerd[1534]: 2025-05-13 10:03:55.691 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 10:03:55.718424 containerd[1534]: 2025-05-13 10:03:55.692 [INFO][4596] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" HandleID="k8s-pod-network.30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Workload="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.718586 containerd[1534]: 2025-05-13 10:03:55.694 [INFO][4551] cni-plugin/k8s.go 386: Populated endpoint ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hqt2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d59e185-488f-4c52-86e4-340ff54919cf", ResourceVersion:"580", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hqt2t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f58823bf4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:55.718586 containerd[1534]: 2025-05-13 10:03:55.694 [INFO][4551] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.718665 containerd[1534]: 2025-05-13 10:03:55.695 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f58823bf4e ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.718665 containerd[1534]: 2025-05-13 10:03:55.696 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.718707 containerd[1534]: 2025-05-13 10:03:55.696 [INFO][4551] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" 
Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hqt2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d59e185-488f-4c52-86e4-340ff54919cf", ResourceVersion:"580", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235", Pod:"csi-node-driver-hqt2t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2f58823bf4e", MAC:"c6:3c:74:33:8e:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:55.718752 containerd[1534]: 2025-05-13 10:03:55.714 [INFO][4551] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" Namespace="calico-system" Pod="csi-node-driver-hqt2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqt2t-eth0" May 13 10:03:55.746002 containerd[1534]: 
time="2025-05-13T10:03:55.745934940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5\" id:\"a49a741aed0823bae5a8a400a61e2e5715471ef945c71797542621169eceec40\" pid:4642 exited_at:{seconds:1747130635 nanos:745629858}" May 13 10:03:55.776586 containerd[1534]: time="2025-05-13T10:03:55.776468465Z" level=info msg="connecting to shim 30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235" address="unix:///run/containerd/s/a5e8a4bccf576dec7b1621c8c2a74f274f6337f17b9f8c64834e7187d1e212a1" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:55.800069 systemd[1]: Started cri-containerd-30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235.scope - libcontainer container 30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235. May 13 10:03:55.808941 systemd-networkd[1441]: cali6b302959df7: Link UP May 13 10:03:55.809502 systemd-networkd[1441]: cali6b302959df7: Gained carrier May 13 10:03:55.816525 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:55.825983 containerd[1534]: 2025-05-13 10:03:55.601 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0 coredns-668d6bf9bc- kube-system 859e52e9-cc24-4ee2-9387-8ea34429e6d6 666 0 2025-05-13 10:03:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dq7ks eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b302959df7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-" May 13 10:03:55.825983 containerd[1534]: 2025-05-13 10:03:55.602 [INFO][4563] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 10:03:55.825983 containerd[1534]: 2025-05-13 10:03:55.651 [INFO][4598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" HandleID="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Workload="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.670 [INFO][4598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" HandleID="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Workload="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360d60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dq7ks", "timestamp":"2025-05-13 10:03:55.651009703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.670 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.691 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.692 [INFO][4598] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.769 [INFO][4598] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" host="localhost" May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.774 [INFO][4598] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.780 [INFO][4598] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.784 [INFO][4598] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.787 [INFO][4598] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 10:03:55.826154 containerd[1534]: 2025-05-13 10:03:55.787 [INFO][4598] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" host="localhost" May 13 10:03:55.826413 containerd[1534]: 2025-05-13 10:03:55.789 [INFO][4598] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a May 13 10:03:55.826413 containerd[1534]: 2025-05-13 10:03:55.793 [INFO][4598] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" host="localhost" May 13 10:03:55.826413 containerd[1534]: 2025-05-13 10:03:55.802 [INFO][4598] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" host="localhost" May 13 10:03:55.826413 containerd[1534]: 2025-05-13 10:03:55.802 [INFO][4598] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" host="localhost" May 13 10:03:55.826413 containerd[1534]: 2025-05-13 10:03:55.802 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 10:03:55.826413 containerd[1534]: 2025-05-13 10:03:55.802 [INFO][4598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" HandleID="k8s-pod-network.9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Workload="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 10:03:55.826522 containerd[1534]: 2025-05-13 10:03:55.806 [INFO][4563] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"859e52e9-cc24-4ee2-9387-8ea34429e6d6", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dq7ks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b302959df7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:55.826571 containerd[1534]: 2025-05-13 10:03:55.807 [INFO][4563] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 10:03:55.826571 containerd[1534]: 2025-05-13 10:03:55.807 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b302959df7 ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 10:03:55.826571 containerd[1534]: 2025-05-13 10:03:55.809 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 
10:03:55.826629 containerd[1534]: 2025-05-13 10:03:55.810 [INFO][4563] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"859e52e9-cc24-4ee2-9387-8ea34429e6d6", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a", Pod:"coredns-668d6bf9bc-dq7ks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b302959df7", MAC:"da:a8:3e:2e:de:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:55.826629 containerd[1534]: 2025-05-13 10:03:55.823 [INFO][4563] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dq7ks-eth0" May 13 10:03:55.831810 containerd[1534]: time="2025-05-13T10:03:55.831750516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqt2t,Uid:3d59e185-488f-4c52-86e4-340ff54919cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235\"" May 13 10:03:55.854315 containerd[1534]: time="2025-05-13T10:03:55.854275707Z" level=info msg="connecting to shim 9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a" address="unix:///run/containerd/s/9f2d3ee4c929a30df22d9d8c64a3450de53045782bb983e8cb83fa0a543abad3" namespace=k8s.io protocol=ttrpc version=3 May 13 10:03:55.878935 systemd[1]: Started cri-containerd-9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a.scope - libcontainer container 9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a. 
May 13 10:03:55.893812 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:55.904481 systemd-networkd[1441]: calie6dc8188a15: Link UP May 13 10:03:55.905133 systemd-networkd[1441]: calie6dc8188a15: Gained carrier May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.605 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0 coredns-668d6bf9bc- kube-system f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e 665 0 2025-05-13 10:03:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9q7k6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie6dc8188a15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.605 [INFO][4571] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.659 [INFO][4600] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" HandleID="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Workload="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.672 [INFO][4600] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" HandleID="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Workload="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400045f8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9q7k6", "timestamp":"2025-05-13 10:03:55.659328879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.672 [INFO][4600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.802 [INFO][4600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.802 [INFO][4600] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.869 [INFO][4600] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.875 [INFO][4600] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.881 [INFO][4600] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.883 [INFO][4600] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.885 [INFO][4600] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.885 [INFO][4600] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.887 [INFO][4600] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109 May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.891 [INFO][4600] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.899 [INFO][4600] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.899 [INFO][4600] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" host="localhost" May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.899 [INFO][4600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 10:03:55.923489 containerd[1534]: 2025-05-13 10:03:55.899 [INFO][4600] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" HandleID="k8s-pod-network.bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Workload="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.924084 containerd[1534]: 2025-05-13 10:03:55.901 [INFO][4571] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9q7k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6dc8188a15", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:55.924084 containerd[1534]: 2025-05-13 10:03:55.901 [INFO][4571] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.924084 containerd[1534]: 2025-05-13 10:03:55.901 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6dc8188a15 ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.924084 containerd[1534]: 2025-05-13 10:03:55.905 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.924084 containerd[1534]: 2025-05-13 10:03:55.905 [INFO][4571] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 10, 3, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109", Pod:"coredns-668d6bf9bc-9q7k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6dc8188a15", MAC:"26:f9:0c:5b:f7:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 10:03:55.924084 containerd[1534]: 2025-05-13 10:03:55.918 [INFO][4571] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q7k6" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q7k6-eth0" May 13 10:03:55.928864 containerd[1534]: time="2025-05-13T10:03:55.928744407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7ks,Uid:859e52e9-cc24-4ee2-9387-8ea34429e6d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a\"" May 13 10:03:55.931563 containerd[1534]: time="2025-05-13T10:03:55.931529986Z" level=info msg="CreateContainer within sandbox \"9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 10:03:55.943802 containerd[1534]: time="2025-05-13T10:03:55.943408266Z" level=info msg="Container 126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:55.950751 containerd[1534]: time="2025-05-13T10:03:55.950654314Z" level=info msg="CreateContainer within sandbox \"9800b9dbd8f071000a3c73e1d0c438d841e9ccbcb71a062f136c37efe815a82a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e\"" May 13 10:03:55.953261 containerd[1534]: time="2025-05-13T10:03:55.953217731Z" level=info msg="StartContainer for \"126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e\"" May 13 10:03:55.956017 containerd[1534]: time="2025-05-13T10:03:55.955358626Z" level=info msg="connecting to shim 126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e" address="unix:///run/containerd/s/9f2d3ee4c929a30df22d9d8c64a3450de53045782bb983e8cb83fa0a543abad3" protocol=ttrpc version=3 May 13 10:03:55.971474 containerd[1534]: time="2025-05-13T10:03:55.970996851Z" level=info msg="connecting to shim bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109" address="unix:///run/containerd/s/83f585583f3634aa18f3bfa3fb6012702852eb0150cf14e366d48aea5d1d1769" namespace=k8s.io protocol=ttrpc 
version=3 May 13 10:03:55.975994 systemd[1]: Started cri-containerd-126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e.scope - libcontainer container 126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e. May 13 10:03:55.998547 systemd[1]: Started cri-containerd-bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109.scope - libcontainer container bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109. May 13 10:03:56.011401 containerd[1534]: time="2025-05-13T10:03:56.009732269Z" level=info msg="StartContainer for \"126c51ff9ead83655350dd86cf8aec14105a557576ac3c78e8a300f35e1cf87e\" returns successfully" May 13 10:03:56.022672 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 10:03:56.051666 containerd[1534]: time="2025-05-13T10:03:56.051618742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q7k6,Uid:f3dfd6a8-9fad-4cbc-ac4f-5d413bd2328e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109\"" May 13 10:03:56.054137 containerd[1534]: time="2025-05-13T10:03:56.054080079Z" level=info msg="CreateContainer within sandbox \"bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 10:03:56.062725 containerd[1534]: time="2025-05-13T10:03:56.062685295Z" level=info msg="Container ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:56.070402 containerd[1534]: time="2025-05-13T10:03:56.070358305Z" level=info msg="CreateContainer within sandbox \"bb1169d56c0fca9cff6ae1c283c0a585c5c4757e318054a1348cacf49e461109\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5\"" May 13 10:03:56.070882 containerd[1534]: 
time="2025-05-13T10:03:56.070826148Z" level=info msg="StartContainer for \"ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5\"" May 13 10:03:56.071727 containerd[1534]: time="2025-05-13T10:03:56.071641633Z" level=info msg="connecting to shim ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5" address="unix:///run/containerd/s/83f585583f3634aa18f3bfa3fb6012702852eb0150cf14e366d48aea5d1d1769" protocol=ttrpc version=3 May 13 10:03:56.093961 systemd[1]: Started cri-containerd-ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5.scope - libcontainer container ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5. May 13 10:03:56.127997 containerd[1534]: time="2025-05-13T10:03:56.127958241Z" level=info msg="StartContainer for \"ddb8ca060d2039f002b4915e36a01d955015b2ac4b1bc48c508f87a3ec992ed5\" returns successfully" May 13 10:03:56.723828 kubelet[2652]: I0513 10:03:56.723733 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9q7k6" podStartSLOduration=34.723715691 podStartE2EDuration="34.723715691s" podCreationTimestamp="2025-05-13 10:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:56.722203201 +0000 UTC m=+40.280068431" watchObservedRunningTime="2025-05-13 10:03:56.723715691 +0000 UTC m=+40.281580921" May 13 10:03:56.744107 kubelet[2652]: I0513 10:03:56.743915 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dq7ks" podStartSLOduration=34.743730342 podStartE2EDuration="34.743730342s" podCreationTimestamp="2025-05-13 10:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 10:03:56.74198593 +0000 UTC m=+40.299851160" watchObservedRunningTime="2025-05-13 10:03:56.743730342 +0000 UTC m=+40.301595572" 
May 13 10:03:57.305924 systemd-networkd[1441]: calie6dc8188a15: Gained IPv6LL May 13 10:03:57.433955 systemd-networkd[1441]: cali2f58823bf4e: Gained IPv6LL May 13 10:03:57.491505 containerd[1534]: time="2025-05-13T10:03:57.491451138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:57.492686 containerd[1534]: time="2025-05-13T10:03:57.492462624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 10:03:57.493469 containerd[1534]: time="2025-05-13T10:03:57.493436070Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:57.495561 containerd[1534]: time="2025-05-13T10:03:57.495521163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:57.496359 containerd[1534]: time="2025-05-13T10:03:57.496335049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.486293127s" May 13 10:03:57.496359 containerd[1534]: time="2025-05-13T10:03:57.496360889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 10:03:57.506874 containerd[1534]: time="2025-05-13T10:03:57.506699675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 10:03:57.517677 
containerd[1534]: time="2025-05-13T10:03:57.517637824Z" level=info msg="CreateContainer within sandbox \"33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 10:03:57.523881 containerd[1534]: time="2025-05-13T10:03:57.523732303Z" level=info msg="Container f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:57.529571 containerd[1534]: time="2025-05-13T10:03:57.529537580Z" level=info msg="CreateContainer within sandbox \"33b162f301f7cbc1ba517759b6824e83900ee268262f462ad60f139f413d4fb8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597\"" May 13 10:03:57.530689 containerd[1534]: time="2025-05-13T10:03:57.530030623Z" level=info msg="StartContainer for \"f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597\"" May 13 10:03:57.531185 containerd[1534]: time="2025-05-13T10:03:57.531154950Z" level=info msg="connecting to shim f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597" address="unix:///run/containerd/s/99621d033ee92c4e5f7e7e90269d4fc7e9d814d9c98950fe8063d37dff22405c" protocol=ttrpc version=3 May 13 10:03:57.579978 systemd[1]: Started cri-containerd-f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597.scope - libcontainer container f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597. 
May 13 10:03:57.625275 containerd[1534]: time="2025-05-13T10:03:57.625230828Z" level=info msg="StartContainer for \"f3f087cb23d90eb85ef161276cb33c5bac1b02dce607a6369423b85a7d40b597\" returns successfully" May 13 10:03:57.726831 kubelet[2652]: I0513 10:03:57.725855 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fff58d5bc-rgbbb" podStartSLOduration=26.010352148 podStartE2EDuration="29.725838027s" podCreationTimestamp="2025-05-13 10:03:28 +0000 UTC" firstStartedPulling="2025-05-13 10:03:53.790847833 +0000 UTC m=+37.348713063" lastFinishedPulling="2025-05-13 10:03:57.506333712 +0000 UTC m=+41.064198942" observedRunningTime="2025-05-13 10:03:57.724548579 +0000 UTC m=+41.282413889" watchObservedRunningTime="2025-05-13 10:03:57.725838027 +0000 UTC m=+41.283703297" May 13 10:03:57.792975 containerd[1534]: time="2025-05-13T10:03:57.792912893Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:57.794933 containerd[1534]: time="2025-05-13T10:03:57.794894026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 10:03:57.797492 containerd[1534]: time="2025-05-13T10:03:57.797449482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 290.626407ms" May 13 10:03:57.797557 containerd[1534]: time="2025-05-13T10:03:57.797499162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 10:03:57.798528 containerd[1534]: 
time="2025-05-13T10:03:57.798493369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 10:03:57.800602 containerd[1534]: time="2025-05-13T10:03:57.800560142Z" level=info msg="CreateContainer within sandbox \"1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 10:03:57.810802 containerd[1534]: time="2025-05-13T10:03:57.808739314Z" level=info msg="Container 3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:57.815071 containerd[1534]: time="2025-05-13T10:03:57.815025234Z" level=info msg="CreateContainer within sandbox \"1949d1ff97aa0cb00201fb8faa49351b30275db6afcabd084e22da28362aa2bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595\"" May 13 10:03:57.815577 containerd[1534]: time="2025-05-13T10:03:57.815539477Z" level=info msg="StartContainer for \"3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595\"" May 13 10:03:57.816753 containerd[1534]: time="2025-05-13T10:03:57.816705844Z" level=info msg="connecting to shim 3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595" address="unix:///run/containerd/s/8cde8e267e899b9ebe4bc5381a9050f53f3df732b5117b6c7c2754cb7abd1628" protocol=ttrpc version=3 May 13 10:03:57.817300 systemd-networkd[1441]: cali6b302959df7: Gained IPv6LL May 13 10:03:57.845947 systemd[1]: Started cri-containerd-3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595.scope - libcontainer container 3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595. 
May 13 10:03:57.890592 containerd[1534]: time="2025-05-13T10:03:57.890545193Z" level=info msg="StartContainer for \"3e2ad4fc32e873c500e96ab0f013de3b0e61b7c18aaee30c49e19148202f9595\" returns successfully" May 13 10:03:58.199211 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:45270.service - OpenSSH per-connection server daemon (10.0.0.1:45270). May 13 10:03:58.258870 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 45270 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:03:58.260448 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:58.266491 systemd-logind[1515]: New session 10 of user core. May 13 10:03:58.275956 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 10:03:58.478173 sshd[4975]: Connection closed by 10.0.0.1 port 45270 May 13 10:03:58.478613 sshd-session[4973]: pam_unix(sshd:session): session closed for user core May 13 10:03:58.492768 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:45270.service: Deactivated successfully. May 13 10:03:58.495539 systemd[1]: session-10.scope: Deactivated successfully. May 13 10:03:58.497209 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. May 13 10:03:58.499896 systemd-logind[1515]: Removed session 10. May 13 10:03:58.502190 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:45272.service - OpenSSH per-connection server daemon (10.0.0.1:45272). May 13 10:03:58.558424 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 45272 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:03:58.560007 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:58.564434 systemd-logind[1515]: New session 11 of user core. May 13 10:03:58.570941 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 13 10:03:58.723204 kubelet[2652]: I0513 10:03:58.722899 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:03:58.909674 sshd[4991]: Connection closed by 10.0.0.1 port 45272 May 13 10:03:58.910884 sshd-session[4989]: pam_unix(sshd:session): session closed for user core May 13 10:03:58.931234 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:45272.service: Deactivated successfully. May 13 10:03:58.937236 systemd[1]: session-11.scope: Deactivated successfully. May 13 10:03:58.941932 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. May 13 10:03:58.944651 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:45286.service - OpenSSH per-connection server daemon (10.0.0.1:45286). May 13 10:03:58.947819 systemd-logind[1515]: Removed session 11. May 13 10:03:58.999417 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 45286 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:03:59.001025 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:03:59.007647 systemd-logind[1515]: New session 12 of user core. May 13 10:03:59.016119 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 10:03:59.217377 sshd[5013]: Connection closed by 10.0.0.1 port 45286 May 13 10:03:59.217500 sshd-session[5011]: pam_unix(sshd:session): session closed for user core May 13 10:03:59.224426 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:45286.service: Deactivated successfully. May 13 10:03:59.228699 systemd[1]: session-12.scope: Deactivated successfully. May 13 10:03:59.231631 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. May 13 10:03:59.234652 systemd-logind[1515]: Removed session 12. 
May 13 10:03:59.650590 containerd[1534]: time="2025-05-13T10:03:59.650472988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:59.651077 containerd[1534]: time="2025-05-13T10:03:59.651040791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 10:03:59.653395 containerd[1534]: time="2025-05-13T10:03:59.653363365Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:59.654812 containerd[1534]: time="2025-05-13T10:03:59.654766253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:03:59.655422 containerd[1534]: time="2025-05-13T10:03:59.655361097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.856831808s" May 13 10:03:59.657143 containerd[1534]: time="2025-05-13T10:03:59.656821866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 10:03:59.674328 containerd[1534]: time="2025-05-13T10:03:59.674286371Z" level=info msg="CreateContainer within sandbox \"30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 10:03:59.687970 containerd[1534]: time="2025-05-13T10:03:59.687926413Z" level=info msg="Container 
05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff: CDI devices from CRI Config.CDIDevices: []" May 13 10:03:59.698091 containerd[1534]: time="2025-05-13T10:03:59.698050954Z" level=info msg="CreateContainer within sandbox \"30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff\"" May 13 10:03:59.699356 containerd[1534]: time="2025-05-13T10:03:59.699321802Z" level=info msg="StartContainer for \"05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff\"" May 13 10:03:59.701714 containerd[1534]: time="2025-05-13T10:03:59.701628975Z" level=info msg="connecting to shim 05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff" address="unix:///run/containerd/s/a5e8a4bccf576dec7b1621c8c2a74f274f6337f17b9f8c64834e7187d1e212a1" protocol=ttrpc version=3 May 13 10:03:59.730046 systemd[1]: Started cri-containerd-05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff.scope - libcontainer container 05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff. 
May 13 10:03:59.732882 kubelet[2652]: I0513 10:03:59.732853 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:03:59.795723 containerd[1534]: time="2025-05-13T10:03:59.794410574Z" level=info msg="StartContainer for \"05b91c6d5d60e9cdc2632c4185fd1e15b14c459e7199e04fe86733562bfd5bff\" returns successfully" May 13 10:03:59.796268 containerd[1534]: time="2025-05-13T10:03:59.796208065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 10:04:01.323588 kubelet[2652]: I0513 10:04:01.321459 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 10:04:01.380847 kubelet[2652]: I0513 10:04:01.379612 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fff58d5bc-tnz59" podStartSLOduration=30.373397488 podStartE2EDuration="33.379596196s" podCreationTimestamp="2025-05-13 10:03:28 +0000 UTC" firstStartedPulling="2025-05-13 10:03:54.79214086 +0000 UTC m=+38.350006090" lastFinishedPulling="2025-05-13 10:03:57.798339568 +0000 UTC m=+41.356204798" observedRunningTime="2025-05-13 10:03:58.751440496 +0000 UTC m=+42.309305726" watchObservedRunningTime="2025-05-13 10:04:01.379596196 +0000 UTC m=+44.937461426" May 13 10:04:01.557125 containerd[1534]: time="2025-05-13T10:04:01.557076088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:04:01.564794 containerd[1534]: time="2025-05-13T10:04:01.557926853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 10:04:01.564794 containerd[1534]: time="2025-05-13T10:04:01.558958539Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:04:01.565186 
containerd[1534]: time="2025-05-13T10:04:01.561083991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.764839606s" May 13 10:04:01.565186 containerd[1534]: time="2025-05-13T10:04:01.565087734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 10:04:01.566115 containerd[1534]: time="2025-05-13T10:04:01.566084739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 10:04:01.567586 containerd[1534]: time="2025-05-13T10:04:01.567549588Z" level=info msg="CreateContainer within sandbox \"30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 10:04:01.575849 containerd[1534]: time="2025-05-13T10:04:01.575736754Z" level=info msg="Container 0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd: CDI devices from CRI Config.CDIDevices: []" May 13 10:04:01.585794 containerd[1534]: time="2025-05-13T10:04:01.585731411Z" level=info msg="CreateContainer within sandbox \"30b1f0a04877f484c563b839ec1e08523eeb1f121d89d8fc0e753e5b9bedd235\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd\"" May 13 10:04:01.587812 containerd[1534]: time="2025-05-13T10:04:01.587712943Z" level=info msg="StartContainer for 
\"0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd\"" May 13 10:04:01.589730 containerd[1534]: time="2025-05-13T10:04:01.589388672Z" level=info msg="connecting to shim 0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd" address="unix:///run/containerd/s/a5e8a4bccf576dec7b1621c8c2a74f274f6337f17b9f8c64834e7187d1e212a1" protocol=ttrpc version=3 May 13 10:04:01.611974 systemd[1]: Started cri-containerd-0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd.scope - libcontainer container 0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd. May 13 10:04:01.646435 containerd[1534]: time="2025-05-13T10:04:01.646381877Z" level=info msg="StartContainer for \"0e4117384d84d02adcff84f3c7ebaac34d30fca05058dab6f9542b6b5fe371bd\" returns successfully" May 13 10:04:01.755416 kubelet[2652]: I0513 10:04:01.755356 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hqt2t" podStartSLOduration=27.022324805 podStartE2EDuration="32.755342259s" podCreationTimestamp="2025-05-13 10:03:29 +0000 UTC" firstStartedPulling="2025-05-13 10:03:55.833351367 +0000 UTC m=+39.391216597" lastFinishedPulling="2025-05-13 10:04:01.566368861 +0000 UTC m=+45.124234051" observedRunningTime="2025-05-13 10:04:01.754134332 +0000 UTC m=+45.311999562" watchObservedRunningTime="2025-05-13 10:04:01.755342259 +0000 UTC m=+45.313207489" May 13 10:04:02.621655 kubelet[2652]: I0513 10:04:02.621609 2652 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 10:04:02.622858 kubelet[2652]: I0513 10:04:02.622830 2652 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 10:04:04.233346 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:43672.service - OpenSSH per-connection server daemon 
(10.0.0.1:43672). May 13 10:04:04.295149 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:04.296604 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:04.301047 systemd-logind[1515]: New session 13 of user core. May 13 10:04:04.307935 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 10:04:04.481975 sshd[5111]: Connection closed by 10.0.0.1 port 43672 May 13 10:04:04.482502 sshd-session[5109]: pam_unix(sshd:session): session closed for user core May 13 10:04:04.495063 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:43672.service: Deactivated successfully. May 13 10:04:04.498399 systemd[1]: session-13.scope: Deactivated successfully. May 13 10:04:04.499499 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. May 13 10:04:04.502153 systemd-logind[1515]: Removed session 13. May 13 10:04:04.504315 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676). May 13 10:04:04.551136 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:04.552523 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:04.556847 systemd-logind[1515]: New session 14 of user core. May 13 10:04:04.565027 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 10:04:04.772273 sshd[5133]: Connection closed by 10.0.0.1 port 43676 May 13 10:04:04.773042 sshd-session[5131]: pam_unix(sshd:session): session closed for user core May 13 10:04:04.785183 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:43676.service: Deactivated successfully. May 13 10:04:04.787804 systemd[1]: session-14.scope: Deactivated successfully. May 13 10:04:04.789064 systemd-logind[1515]: Session 14 logged out. 
Waiting for processes to exit. May 13 10:04:04.791820 systemd-logind[1515]: Removed session 14. May 13 10:04:04.793319 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:43684.service - OpenSSH per-connection server daemon (10.0.0.1:43684). May 13 10:04:04.839570 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 43684 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:04.840812 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:04.845430 systemd-logind[1515]: New session 15 of user core. May 13 10:04:04.861951 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 10:04:05.603775 sshd[5147]: Connection closed by 10.0.0.1 port 43684 May 13 10:04:05.604446 sshd-session[5145]: pam_unix(sshd:session): session closed for user core May 13 10:04:05.613607 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:43684.service: Deactivated successfully. May 13 10:04:05.615114 systemd[1]: session-15.scope: Deactivated successfully. May 13 10:04:05.616847 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. May 13 10:04:05.620099 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:43692.service - OpenSSH per-connection server daemon (10.0.0.1:43692). May 13 10:04:05.622790 systemd-logind[1515]: Removed session 15. May 13 10:04:05.676701 sshd[5166]: Accepted publickey for core from 10.0.0.1 port 43692 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:05.677835 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:05.681654 systemd-logind[1515]: New session 16 of user core. May 13 10:04:05.689976 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 13 10:04:05.977810 sshd[5168]: Connection closed by 10.0.0.1 port 43692 May 13 10:04:05.978223 sshd-session[5166]: pam_unix(sshd:session): session closed for user core May 13 10:04:05.988753 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:43692.service: Deactivated successfully. May 13 10:04:05.991543 systemd[1]: session-16.scope: Deactivated successfully. May 13 10:04:05.994160 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. May 13 10:04:05.996034 systemd-logind[1515]: Removed session 16. May 13 10:04:05.997845 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:43702.service - OpenSSH per-connection server daemon (10.0.0.1:43702). May 13 10:04:06.053678 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 43702 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:06.055013 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:06.059709 systemd-logind[1515]: New session 17 of user core. May 13 10:04:06.066958 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 10:04:06.197552 sshd[5183]: Connection closed by 10.0.0.1 port 43702 May 13 10:04:06.198039 sshd-session[5181]: pam_unix(sshd:session): session closed for user core May 13 10:04:06.201551 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:43702.service: Deactivated successfully. May 13 10:04:06.203513 systemd[1]: session-17.scope: Deactivated successfully. May 13 10:04:06.204347 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. May 13 10:04:06.205620 systemd-logind[1515]: Removed session 17. May 13 10:04:11.221877 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:43712.service - OpenSSH per-connection server daemon (10.0.0.1:43712). 
May 13 10:04:11.262482 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 43712 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:11.263625 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:11.267502 systemd-logind[1515]: New session 18 of user core. May 13 10:04:11.273952 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 10:04:11.395299 sshd[5198]: Connection closed by 10.0.0.1 port 43712 May 13 10:04:11.395592 sshd-session[5196]: pam_unix(sshd:session): session closed for user core May 13 10:04:11.399104 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:43712.service: Deactivated successfully. May 13 10:04:11.400861 systemd[1]: session-18.scope: Deactivated successfully. May 13 10:04:11.401610 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. May 13 10:04:11.402649 systemd-logind[1515]: Removed session 18. May 13 10:04:16.408493 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:58756.service - OpenSSH per-connection server daemon (10.0.0.1:58756). May 13 10:04:16.464730 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 58756 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:16.466137 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:16.470309 containerd[1534]: time="2025-05-13T10:04:16.470271823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5\" id:\"df3b82db21322d917a2babae2908e4118dacbd7790fbac5ec37d3e82ab9eec67\" pid:5237 exited_at:{seconds:1747130656 nanos:470055502}" May 13 10:04:16.473419 systemd-logind[1515]: New session 19 of user core. May 13 10:04:16.479037 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 13 10:04:16.626705 sshd[5246]: Connection closed by 10.0.0.1 port 58756 May 13 10:04:16.627019 sshd-session[5223]: pam_unix(sshd:session): session closed for user core May 13 10:04:16.630513 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:58756.service: Deactivated successfully. May 13 10:04:16.633337 systemd[1]: session-19.scope: Deactivated successfully. May 13 10:04:16.634153 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. May 13 10:04:16.635320 systemd-logind[1515]: Removed session 19. May 13 10:04:16.721432 containerd[1534]: time="2025-05-13T10:04:16.721382678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3cbc1fb1bba5f08866c3131516ebac2401297df1b60b83733be4a95f0d1a87f\" id:\"95a5286a92f06e67855ca1b3c46405e4940f693abfa8942daf0f2281f4b23178\" pid:5272 exited_at:{seconds:1747130656 nanos:721103637}" May 13 10:04:21.642543 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:58758.service - OpenSSH per-connection server daemon (10.0.0.1:58758). May 13 10:04:21.702614 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 58758 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:21.706005 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:21.710694 systemd-logind[1515]: New session 20 of user core. May 13 10:04:21.719989 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 10:04:21.928834 sshd[5291]: Connection closed by 10.0.0.1 port 58758 May 13 10:04:21.930019 sshd-session[5289]: pam_unix(sshd:session): session closed for user core May 13 10:04:21.933082 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:58758.service: Deactivated successfully. May 13 10:04:21.935004 systemd[1]: session-20.scope: Deactivated successfully. May 13 10:04:21.936471 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. May 13 10:04:21.938034 systemd-logind[1515]: Removed session 20. 
May 13 10:04:25.719547 containerd[1534]: time="2025-05-13T10:04:25.719503036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"509bcb38efec94980967e8648cd0a9a36e693f3dee59e80e72f933456cf50ee5\" id:\"aa588c3a8ab2eb104afd454a3b2269b2e8245e61adfa7a96bce5aa8beaf3278c\" pid:5319 exited_at:{seconds:1747130665 nanos:719289755}" May 13 10:04:26.941488 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:35658.service - OpenSSH per-connection server daemon (10.0.0.1:35658). May 13 10:04:26.988823 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 35658 ssh2: RSA SHA256:2d1zHQ2g2EPeQ2if9c89VeQqUVEn4QIf2x3hXF5Pcvw May 13 10:04:26.990645 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 10:04:26.995563 systemd-logind[1515]: New session 21 of user core. May 13 10:04:27.005943 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 10:04:27.171808 sshd[5334]: Connection closed by 10.0.0.1 port 35658 May 13 10:04:27.172293 sshd-session[5332]: pam_unix(sshd:session): session closed for user core May 13 10:04:27.175942 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:35658.service: Deactivated successfully. May 13 10:04:27.177895 systemd[1]: session-21.scope: Deactivated successfully. May 13 10:04:27.178727 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit. May 13 10:04:27.180015 systemd-logind[1515]: Removed session 21.