May 14 18:00:26.807939 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 18:00:26.807965 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 16:42:23 -00 2025
May 14 18:00:26.807976 kernel: KASLR enabled
May 14 18:00:26.807981 kernel: efi: EFI v2.7 by EDK II
May 14 18:00:26.807987 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 14 18:00:26.807992 kernel: random: crng init done
May 14 18:00:26.807999 kernel: secureboot: Secure boot disabled
May 14 18:00:26.808005 kernel: ACPI: Early table checksum verification disabled
May 14 18:00:26.808011 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 14 18:00:26.808018 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 18:00:26.808024 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808030 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808036 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808042 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808049 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808056 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808063 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808069 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808075 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:00:26.808081 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 18:00:26.808087 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 14 18:00:26.808093 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 18:00:26.808099 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 14 18:00:26.808106 kernel: Zone ranges:
May 14 18:00:26.808112 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 18:00:26.808119 kernel: DMA32 empty
May 14 18:00:26.808125 kernel: Normal empty
May 14 18:00:26.808131 kernel: Device empty
May 14 18:00:26.808138 kernel: Movable zone start for each node
May 14 18:00:26.808143 kernel: Early memory node ranges
May 14 18:00:26.808150 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 14 18:00:26.808156 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 14 18:00:26.808162 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 14 18:00:26.808168 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 14 18:00:26.808174 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 14 18:00:26.808180 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 14 18:00:26.808186 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 14 18:00:26.808193 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 14 18:00:26.808288 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 14 18:00:26.808296 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 18:00:26.808307 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 18:00:26.808314 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 18:00:26.808320 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 18:00:26.808328 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 18:00:26.808334 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 18:00:26.808341 kernel: psci: probing for conduit method from ACPI.
May 14 18:00:26.808347 kernel: psci: PSCIv1.1 detected in firmware.
May 14 18:00:26.808494 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 18:00:26.808548 kernel: psci: Trusted OS migration not required
May 14 18:00:26.808588 kernel: psci: SMC Calling Convention v1.1
May 14 18:00:26.808597 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 18:00:26.808603 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 14 18:00:26.808610 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 14 18:00:26.808622 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 18:00:26.808628 kernel: Detected PIPT I-cache on CPU0
May 14 18:00:26.808635 kernel: CPU features: detected: GIC system register CPU interface
May 14 18:00:26.808641 kernel: CPU features: detected: Spectre-v4
May 14 18:00:26.808648 kernel: CPU features: detected: Spectre-BHB
May 14 18:00:26.808654 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 18:00:26.808661 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 18:00:26.808668 kernel: CPU features: detected: ARM erratum 1418040
May 14 18:00:26.808674 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 18:00:26.808680 kernel: alternatives: applying boot alternatives
May 14 18:00:26.808688 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 18:00:26.808697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:00:26.808704 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:00:26.808710 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:00:26.808717 kernel: Fallback order for Node 0: 0
May 14 18:00:26.808724 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 14 18:00:26.808730 kernel: Policy zone: DMA
May 14 18:00:26.808743 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:00:26.808750 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 14 18:00:26.808756 kernel: software IO TLB: area num 4.
May 14 18:00:26.808763 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 14 18:00:26.808770 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 14 18:00:26.808776 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 18:00:26.808785 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:00:26.808792 kernel: rcu: RCU event tracing is enabled.
May 14 18:00:26.808798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 18:00:26.808805 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:00:26.808811 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:00:26.808818 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:00:26.808824 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 18:00:26.808913 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:00:26.808924 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:00:26.808931 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 18:00:26.808937 kernel: GICv3: 256 SPIs implemented
May 14 18:00:26.808947 kernel: GICv3: 0 Extended SPIs implemented
May 14 18:00:26.808954 kernel: Root IRQ handler: gic_handle_irq
May 14 18:00:26.808960 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 18:00:26.808967 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 14 18:00:26.808973 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 18:00:26.809042 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 18:00:26.809050 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 14 18:00:26.809057 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 14 18:00:26.809064 kernel: GICv3: using LPI property table @0x0000000040100000
May 14 18:00:26.809071 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 14 18:00:26.809077 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:00:26.809085 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:00:26.809094 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 18:00:26.809101 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 18:00:26.809108 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 18:00:26.809115 kernel: arm-pv: using stolen time PV
May 14 18:00:26.809122 kernel: Console: colour dummy device 80x25
May 14 18:00:26.809129 kernel: ACPI: Core revision 20240827
May 14 18:00:26.809136 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 18:00:26.809142 kernel: pid_max: default: 32768 minimum: 301
May 14 18:00:26.809149 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:00:26.809157 kernel: landlock: Up and running.
May 14 18:00:26.809164 kernel: SELinux: Initializing.
May 14 18:00:26.809171 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:00:26.809177 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:00:26.809184 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:00:26.809191 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:00:26.809198 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:00:26.809205 kernel: Remapping and enabling EFI services.
May 14 18:00:26.809211 kernel: smp: Bringing up secondary CPUs ...
May 14 18:00:26.809218 kernel: Detected PIPT I-cache on CPU1
May 14 18:00:26.809231 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 18:00:26.809239 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 14 18:00:26.809247 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:00:26.809254 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 18:00:26.809261 kernel: Detected PIPT I-cache on CPU2
May 14 18:00:26.809269 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 18:00:26.809276 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 14 18:00:26.809284 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:00:26.809291 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 18:00:26.809298 kernel: Detected PIPT I-cache on CPU3
May 14 18:00:26.809305 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 18:00:26.809313 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 14 18:00:26.809320 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 18:00:26.809327 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 18:00:26.809333 kernel: smp: Brought up 1 node, 4 CPUs
May 14 18:00:26.809340 kernel: SMP: Total of 4 processors activated.
May 14 18:00:26.809347 kernel: CPU: All CPU(s) started at EL1
May 14 18:00:26.809356 kernel: CPU features: detected: 32-bit EL0 Support
May 14 18:00:26.809363 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 18:00:26.809383 kernel: CPU features: detected: Common not Private translations
May 14 18:00:26.809390 kernel: CPU features: detected: CRC32 instructions
May 14 18:00:26.809398 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 18:00:26.809405 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 18:00:26.809412 kernel: CPU features: detected: LSE atomic instructions
May 14 18:00:26.809420 kernel: CPU features: detected: Privileged Access Never
May 14 18:00:26.809427 kernel: CPU features: detected: RAS Extension Support
May 14 18:00:26.809437 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 18:00:26.809444 kernel: alternatives: applying system-wide alternatives
May 14 18:00:26.809451 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 14 18:00:26.809459 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 14 18:00:26.809467 kernel: devtmpfs: initialized
May 14 18:00:26.809474 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:00:26.809481 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 18:00:26.809488 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 18:00:26.809496 kernel: 0 pages in range for non-PLT usage
May 14 18:00:26.809504 kernel: 508544 pages in range for PLT usage
May 14 18:00:26.809511 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:00:26.809518 kernel: SMBIOS 3.0.0 present.
May 14 18:00:26.809525 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 18:00:26.809533 kernel: DMI: Memory slots populated: 1/1
May 14 18:00:26.809540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:00:26.809547 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 18:00:26.809555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 18:00:26.809562 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 18:00:26.809571 kernel: audit: initializing netlink subsys (disabled)
May 14 18:00:26.809578 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
May 14 18:00:26.809585 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:00:26.809592 kernel: cpuidle: using governor menu
May 14 18:00:26.809599 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 18:00:26.809607 kernel: ASID allocator initialised with 32768 entries
May 14 18:00:26.809614 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:00:26.809621 kernel: Serial: AMBA PL011 UART driver
May 14 18:00:26.809628 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:00:26.809637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:00:26.809644 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 18:00:26.809651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 18:00:26.809658 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:00:26.809665 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:00:26.809672 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 18:00:26.809679 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 18:00:26.809686 kernel: ACPI: Added _OSI(Module Device)
May 14 18:00:26.809694 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:00:26.809702 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:00:26.809709 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:00:26.809717 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:00:26.809724 kernel: ACPI: Interpreter enabled
May 14 18:00:26.809731 kernel: ACPI: Using GIC for interrupt routing
May 14 18:00:26.809745 kernel: ACPI: MCFG table detected, 1 entries
May 14 18:00:26.809752 kernel: ACPI: CPU0 has been hot-added
May 14 18:00:26.809759 kernel: ACPI: CPU1 has been hot-added
May 14 18:00:26.809766 kernel: ACPI: CPU2 has been hot-added
May 14 18:00:26.809775 kernel: ACPI: CPU3 has been hot-added
May 14 18:00:26.809783 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 18:00:26.809790 kernel: printk: legacy console [ttyAMA0] enabled
May 14 18:00:26.809797 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:00:26.809951 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:00:26.810021 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 18:00:26.810083 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 18:00:26.810144 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 18:00:26.810206 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 18:00:26.810215 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 18:00:26.810223 kernel: PCI host bridge to bus 0000:00
May 14 18:00:26.810289 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 18:00:26.810349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 18:00:26.810437 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 18:00:26.810495 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:00:26.810576 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 14 18:00:26.810651 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:00:26.810716 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 14 18:00:26.810789 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 14 18:00:26.810854 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 18:00:26.810918 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 14 18:00:26.810980 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 14 18:00:26.811046 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 14 18:00:26.811102 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 18:00:26.811157 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 18:00:26.811213 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 18:00:26.811222 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 18:00:26.811230 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 18:00:26.811237 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 18:00:26.811246 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 18:00:26.811253 kernel: iommu: Default domain type: Translated
May 14 18:00:26.811261 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 18:00:26.811268 kernel: efivars: Registered efivars operations
May 14 18:00:26.811275 kernel: vgaarb: loaded
May 14 18:00:26.811282 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 18:00:26.811289 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:00:26.811296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:00:26.811304 kernel: pnp: PnP ACPI init
May 14 18:00:26.811393 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 18:00:26.811405 kernel: pnp: PnP ACPI: found 1 devices
May 14 18:00:26.811412 kernel: NET: Registered PF_INET protocol family
May 14 18:00:26.811420 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:00:26.811427 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:00:26.811434 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:00:26.811442 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:00:26.811449 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:00:26.811458 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:00:26.811466 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:00:26.811473 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:00:26.811480 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:00:26.811487 kernel: PCI: CLS 0 bytes, default 64
May 14 18:00:26.811495 kernel: kvm [1]: HYP mode not available
May 14 18:00:26.811502 kernel: Initialise system trusted keyrings
May 14 18:00:26.811509 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:00:26.811516 kernel: Key type asymmetric registered
May 14 18:00:26.811525 kernel: Asymmetric key parser 'x509' registered
May 14 18:00:26.811532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 18:00:26.811539 kernel: io scheduler mq-deadline registered
May 14 18:00:26.811546 kernel: io scheduler kyber registered
May 14 18:00:26.811553 kernel: io scheduler bfq registered
May 14 18:00:26.811561 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 18:00:26.811568 kernel: ACPI: button: Power Button [PWRB]
May 14 18:00:26.811576 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 18:00:26.811642 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 18:00:26.811654 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:00:26.811661 kernel: thunder_xcv, ver 1.0
May 14 18:00:26.811668 kernel: thunder_bgx, ver 1.0
May 14 18:00:26.811675 kernel: nicpf, ver 1.0
May 14 18:00:26.811682 kernel: nicvf, ver 1.0
May 14 18:00:26.811767 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 18:00:26.811832 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T18:00:26 UTC (1747245626)
May 14 18:00:26.811847 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 18:00:26.811856 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 18:00:26.811864 kernel: watchdog: NMI not fully supported
May 14 18:00:26.811871 kernel: watchdog: Hard watchdog permanently disabled
May 14 18:00:26.811878 kernel: NET: Registered PF_INET6 protocol family
May 14 18:00:26.811888 kernel: Segment Routing with IPv6
May 14 18:00:26.811895 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:00:26.811904 kernel: NET: Registered PF_PACKET protocol family
May 14 18:00:26.811913 kernel: Key type dns_resolver registered
May 14 18:00:26.811920 kernel: registered taskstats version 1
May 14 18:00:26.811929 kernel: Loading compiled-in X.509 certificates
May 14 18:00:26.811936 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: c0c250ba312a1bb9bceb2432c486db6e5999df1a'
May 14 18:00:26.811944 kernel: Demotion targets for Node 0: null
May 14 18:00:26.811951 kernel: Key type .fscrypt registered
May 14 18:00:26.811958 kernel: Key type fscrypt-provisioning registered
May 14 18:00:26.811965 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 18:00:26.811972 kernel: ima: Allocated hash algorithm: sha1
May 14 18:00:26.811979 kernel: ima: No architecture policies found
May 14 18:00:26.811986 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 18:00:26.811995 kernel: clk: Disabling unused clocks
May 14 18:00:26.812002 kernel: PM: genpd: Disabling unused power domains
May 14 18:00:26.812009 kernel: Warning: unable to open an initial console.
May 14 18:00:26.812019 kernel: Freeing unused kernel memory: 39424K
May 14 18:00:26.812026 kernel: Run /init as init process
May 14 18:00:26.812033 kernel: with arguments:
May 14 18:00:26.812040 kernel: /init
May 14 18:00:26.812047 kernel: with environment:
May 14 18:00:26.812055 kernel: HOME=/
May 14 18:00:26.812067 kernel: TERM=linux
May 14 18:00:26.812074 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 18:00:26.812083 systemd[1]: Successfully made /usr/ read-only.
May 14 18:00:26.812094 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:00:26.812102 systemd[1]: Detected virtualization kvm.
May 14 18:00:26.812110 systemd[1]: Detected architecture arm64.
May 14 18:00:26.812117 systemd[1]: Running in initrd.
May 14 18:00:26.812125 systemd[1]: No hostname configured, using default hostname.
May 14 18:00:26.812134 systemd[1]: Hostname set to .
May 14 18:00:26.812142 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:00:26.812150 systemd[1]: Queued start job for default target initrd.target.
May 14 18:00:26.812157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:00:26.812165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:00:26.812174 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:00:26.812182 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:00:26.812189 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:00:26.812200 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:00:26.812209 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:00:26.812218 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:00:26.812227 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:00:26.812234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:00:26.812242 systemd[1]: Reached target paths.target - Path Units.
May 14 18:00:26.812251 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:00:26.812260 systemd[1]: Reached target swap.target - Swaps.
May 14 18:00:26.812267 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:00:26.812275 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:00:26.812283 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:00:26.812291 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:00:26.812299 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:00:26.812306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:00:26.812315 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:00:26.812324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:00:26.812331 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:00:26.812339 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:00:26.812347 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:00:26.812354 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:00:26.812363 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:00:26.812378 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:00:26.812387 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:00:26.812397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:00:26.812404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:00:26.812412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:00:26.812420 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:00:26.812428 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:00:26.812438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:00:26.812466 systemd-journald[243]: Collecting audit messages is disabled.
May 14 18:00:26.812486 systemd-journald[243]: Journal started
May 14 18:00:26.812506 systemd-journald[243]: Runtime Journal (/run/log/journal/c5e3064ef98f40999dca5444f87b514e) is 6M, max 48.5M, 42.4M free.
May 14 18:00:26.800936 systemd-modules-load[244]: Inserted module 'overlay'
May 14 18:00:26.819582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:00:26.819603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:00:26.822737 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:00:26.822770 kernel: Bridge firewalling registered
May 14 18:00:26.822760 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 14 18:00:26.823977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:00:26.825451 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:00:26.830790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:00:26.832665 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:00:26.834637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:00:26.840109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:00:26.848138 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:00:26.848585 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:00:26.849525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:00:26.851669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:00:26.854974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:00:26.861444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:00:26.867991 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:00:26.885364 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562
May 14 18:00:26.901494 systemd-resolved[285]: Positive Trust Anchors:
May 14 18:00:26.901511 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:00:26.901542 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:00:26.906409 systemd-resolved[285]: Defaulting to hostname 'linux'.
May 14 18:00:26.907446 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:00:26.911394 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:00:26.969408 kernel: SCSI subsystem initialized
May 14 18:00:26.974432 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:00:26.983406 kernel: iscsi: registered transport (tcp)
May 14 18:00:26.997694 kernel: iscsi: registered transport (qla4xxx)
May 14 18:00:26.997760 kernel: QLogic iSCSI HBA Driver
May 14 18:00:27.014971 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:00:27.037864 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:00:27.039576 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:00:27.086659 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:00:27.089083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:00:27.158398 kernel: raid6: neonx8 gen() 15773 MB/s May 14 18:00:27.175394 kernel: raid6: neonx4 gen() 15738 MB/s May 14 18:00:27.192385 kernel: raid6: neonx2 gen() 13151 MB/s May 14 18:00:27.209400 kernel: raid6: neonx1 gen() 10442 MB/s May 14 18:00:27.226387 kernel: raid6: int64x8 gen() 6883 MB/s May 14 18:00:27.243393 kernel: raid6: int64x4 gen() 7333 MB/s May 14 18:00:27.260389 kernel: raid6: int64x2 gen() 6093 MB/s May 14 18:00:27.277385 kernel: raid6: int64x1 gen() 5034 MB/s May 14 18:00:27.277399 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s May 14 18:00:27.294392 kernel: raid6: .... xor() 12020 MB/s, rmw enabled May 14 18:00:27.294413 kernel: raid6: using neon recovery algorithm May 14 18:00:27.299498 kernel: xor: measuring software checksum speed May 14 18:00:27.299526 kernel: 8regs : 21107 MB/sec May 14 18:00:27.300575 kernel: 32regs : 21681 MB/sec May 14 18:00:27.300586 kernel: arm64_neon : 27304 MB/sec May 14 18:00:27.300595 kernel: xor: using function: arm64_neon (27304 MB/sec) May 14 18:00:27.355415 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 18:00:27.361505 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 18:00:27.364094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:00:27.401340 systemd-udevd[498]: Using default interface naming scheme 'v255'. May 14 18:00:27.405426 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:00:27.407987 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 18:00:27.430101 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 14 18:00:27.453848 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:00:27.456425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:00:27.504212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 14 18:00:27.506642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 18:00:27.561955 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 14 18:00:27.580748 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 18:00:27.580875 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 18:00:27.580887 kernel: GPT:9289727 != 19775487 May 14 18:00:27.580897 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 18:00:27.580905 kernel: GPT:9289727 != 19775487 May 14 18:00:27.580914 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 18:00:27.580923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:00:27.569222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:00:27.569348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:00:27.571569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:00:27.578932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:00:27.610626 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 18:00:27.611977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:00:27.623330 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 18:00:27.630699 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 18:00:27.631665 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 18:00:27.641223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 18:00:27.648933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 14 18:00:27.650103 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:00:27.652166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:00:27.654346 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:00:27.657245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 18:00:27.659207 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 18:00:27.673364 disk-uuid[590]: Primary Header is updated. May 14 18:00:27.673364 disk-uuid[590]: Secondary Entries is updated. May 14 18:00:27.673364 disk-uuid[590]: Secondary Header is updated. May 14 18:00:27.677397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:00:27.680017 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 18:00:28.687396 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 18:00:28.687451 disk-uuid[593]: The operation has completed successfully. May 14 18:00:28.713186 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 18:00:28.713288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 18:00:28.737861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 18:00:28.755519 sh[609]: Success May 14 18:00:28.767540 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 18:00:28.767575 kernel: device-mapper: uevent: version 1.0.3 May 14 18:00:28.768392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 18:00:28.786348 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 14 18:00:28.828199 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 18:00:28.830320 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 14 18:00:28.844185 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 18:00:28.852028 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 18:00:28.852063 kernel: BTRFS: device fsid e21bbf34-4c71-4257-bd6f-908a2b81e5ab devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (622) May 14 18:00:28.854008 kernel: BTRFS info (device dm-0): first mount of filesystem e21bbf34-4c71-4257-bd6f-908a2b81e5ab May 14 18:00:28.854029 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:28.854039 kernel: BTRFS info (device dm-0): using free-space-tree May 14 18:00:28.858363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 18:00:28.859704 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 18:00:28.861179 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 18:00:28.862113 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 18:00:28.865558 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 18:00:28.894391 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (657) May 14 18:00:28.896405 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:28.896424 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:28.896434 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:00:28.905402 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:28.907307 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 18:00:28.909397 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 14 18:00:28.970416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:00:28.974212 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:00:29.024046 systemd-networkd[799]: lo: Link UP May 14 18:00:29.024060 systemd-networkd[799]: lo: Gained carrier May 14 18:00:29.024844 systemd-networkd[799]: Enumeration completed May 14 18:00:29.024979 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:00:29.025694 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:00:29.025697 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:00:29.026049 systemd[1]: Reached target network.target - Network. May 14 18:00:29.026773 systemd-networkd[799]: eth0: Link UP May 14 18:00:29.026776 systemd-networkd[799]: eth0: Gained carrier May 14 18:00:29.026786 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 14 18:00:29.049446 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 18:00:29.057215 ignition[706]: Ignition 2.21.0 May 14 18:00:29.057230 ignition[706]: Stage: fetch-offline May 14 18:00:29.057268 ignition[706]: no configs at "/usr/lib/ignition/base.d" May 14 18:00:29.057276 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:29.057488 ignition[706]: parsed url from cmdline: "" May 14 18:00:29.057494 ignition[706]: no config URL provided May 14 18:00:29.057498 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:00:29.057506 ignition[706]: no config at "/usr/lib/ignition/user.ign" May 14 18:00:29.057525 ignition[706]: op(1): [started] loading QEMU firmware config module May 14 18:00:29.057530 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 18:00:29.066974 ignition[706]: op(1): [finished] loading QEMU firmware config module May 14 18:00:29.106351 ignition[706]: parsing config with SHA512: 7c0d4cba227457923debc2e61723ecfa376702db43de87cc019d7f731d99a734be57aefccb0da46ad1128c492f1b83aa60dd5f933f5f8ea3144abcdeaa6030f5 May 14 18:00:29.113123 unknown[706]: fetched base config from "system" May 14 18:00:29.113138 unknown[706]: fetched user config from "qemu" May 14 18:00:29.113612 ignition[706]: fetch-offline: fetch-offline passed May 14 18:00:29.113679 ignition[706]: Ignition finished successfully May 14 18:00:29.116848 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:00:29.118272 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 18:00:29.119106 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 14 18:00:29.153913 ignition[812]: Ignition 2.21.0 May 14 18:00:29.153930 ignition[812]: Stage: kargs May 14 18:00:29.154077 ignition[812]: no configs at "/usr/lib/ignition/base.d" May 14 18:00:29.154086 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:29.155640 ignition[812]: kargs: kargs passed May 14 18:00:29.157820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 18:00:29.155697 ignition[812]: Ignition finished successfully May 14 18:00:29.159882 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 18:00:29.186158 ignition[820]: Ignition 2.21.0 May 14 18:00:29.186177 ignition[820]: Stage: disks May 14 18:00:29.186349 ignition[820]: no configs at "/usr/lib/ignition/base.d" May 14 18:00:29.186359 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:29.187919 ignition[820]: disks: disks passed May 14 18:00:29.191015 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 18:00:29.187985 ignition[820]: Ignition finished successfully May 14 18:00:29.192617 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 18:00:29.194441 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 18:00:29.196188 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:00:29.198076 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:00:29.200112 systemd[1]: Reached target basic.target - Basic System. May 14 18:00:29.202785 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 18:00:29.242115 systemd-resolved[285]: Detected conflict on linux IN A 10.0.0.61 May 14 18:00:29.242133 systemd-resolved[285]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. 
May 14 18:00:29.245055 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 18:00:29.246895 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 18:00:29.249715 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 18:00:29.326395 kernel: EXT4-fs (vda9): mounted filesystem a9c1ea72-ce96-48c1-8c16-d7102e51beed r/w with ordered data mode. Quota mode: none. May 14 18:00:29.326683 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 18:00:29.327841 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 18:00:29.330244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:00:29.332131 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 18:00:29.333215 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 18:00:29.333279 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 18:00:29.333306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:00:29.348040 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 18:00:29.351523 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (839) May 14 18:00:29.350751 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 18:00:29.355457 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:29.355478 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 18:00:29.355488 kernel: BTRFS info (device vda6): using free-space-tree May 14 18:00:29.360164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 18:00:29.408142 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory May 14 18:00:29.411543 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory May 14 18:00:29.414802 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory May 14 18:00:29.417938 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory May 14 18:00:29.500761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 18:00:29.502934 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 18:00:29.504606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 18:00:29.521390 kernel: BTRFS info (device vda6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 18:00:29.533324 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 18:00:29.541545 ignition[953]: INFO : Ignition 2.21.0 May 14 18:00:29.541545 ignition[953]: INFO : Stage: mount May 14 18:00:29.543218 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:00:29.543218 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:29.543218 ignition[953]: INFO : mount: mount passed May 14 18:00:29.543218 ignition[953]: INFO : Ignition finished successfully May 14 18:00:29.544886 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 18:00:29.547133 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 18:00:29.851445 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 18:00:29.853048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 14 18:00:29.872391 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (965)
May 14 18:00:29.874553 kernel: BTRFS info (device vda6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2
May 14 18:00:29.874568 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 18:00:29.874578 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:00:29.878334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:00:29.905487 ignition[982]: INFO : Ignition 2.21.0
May 14 18:00:29.905487 ignition[982]: INFO : Stage: files
May 14 18:00:29.907136 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:00:29.907136 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:00:29.907136 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:00:29.910738 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:00:29.910738 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:00:29.910738 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:00:29.910738 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:00:29.910738 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:00:29.910278 unknown[982]: wrote ssh authorized keys file for user: core
May 14 18:00:29.918612 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 18:00:29.918612 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 18:00:30.015137 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:00:30.146485 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:00:30.148572 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:00:30.162691 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:00:30.162691 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:00:30.162691 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:00:30.162691 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:00:30.162691 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:00:30.162691 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 14 18:00:30.570091 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 18:00:30.950622 systemd-networkd[799]: eth0: Gained IPv6LL
May 14 18:00:31.286781 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 18:00:31.286781 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 18:00:31.290663 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 18:00:31.307905 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:00:31.311906 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:00:31.313472 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 18:00:31.313472 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:00:31.313472 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:00:31.313472 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:00:31.313472 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:00:31.313472 ignition[982]: INFO : files: files passed
May 14 18:00:31.313472 ignition[982]: INFO : Ignition finished successfully
May 14 18:00:31.315230 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:00:31.319539 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:00:31.322158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:00:31.332193 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:00:31.332305 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:00:31.335535 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory May 14 18:00:31.336934 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:00:31.336934 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 18:00:31.339911 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:00:31.339178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:00:31.341440 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 18:00:31.343397 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 18:00:31.404014 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 18:00:31.405132 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 18:00:31.406644 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 18:00:31.408614 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 18:00:31.410424 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 18:00:31.411313 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 18:00:31.433789 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:00:31.436293 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 18:00:31.466279 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 18:00:31.467578 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:00:31.469639 systemd[1]: Stopped target timers.target - Timer Units. 
May 14 18:00:31.471518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 18:00:31.471643 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:00:31.474193 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 18:00:31.475346 systemd[1]: Stopped target basic.target - Basic System. May 14 18:00:31.477261 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 18:00:31.479235 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:00:31.481204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 18:00:31.483241 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 18:00:31.485345 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 18:00:31.487251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:00:31.489459 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 18:00:31.491260 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 18:00:31.493265 systemd[1]: Stopped target swap.target - Swaps. May 14 18:00:31.494771 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 18:00:31.494902 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 18:00:31.497285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 18:00:31.499227 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:00:31.501280 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 18:00:31.504441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:00:31.505743 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 18:00:31.505864 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 14 18:00:31.508670 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 18:00:31.508786 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:00:31.510987 systemd[1]: Stopped target paths.target - Path Units. May 14 18:00:31.512651 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 18:00:31.513592 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:00:31.514973 systemd[1]: Stopped target slices.target - Slice Units. May 14 18:00:31.516620 systemd[1]: Stopped target sockets.target - Socket Units. May 14 18:00:31.518341 systemd[1]: iscsid.socket: Deactivated successfully. May 14 18:00:31.518450 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:00:31.520663 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 18:00:31.520760 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:00:31.522413 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 18:00:31.522536 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:00:31.524301 systemd[1]: ignition-files.service: Deactivated successfully. May 14 18:00:31.524423 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 18:00:31.526785 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 18:00:31.528350 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 18:00:31.528510 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:00:31.537915 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 18:00:31.538821 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 18:00:31.538950 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 14 18:00:31.540864 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 18:00:31.540965 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:00:31.547249 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 18:00:31.547345 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 18:00:31.553337 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 18:00:31.554240 ignition[1038]: INFO : Ignition 2.21.0 May 14 18:00:31.554240 ignition[1038]: INFO : Stage: umount May 14 18:00:31.554240 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:00:31.554240 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:00:31.563167 ignition[1038]: INFO : umount: umount passed May 14 18:00:31.563167 ignition[1038]: INFO : Ignition finished successfully May 14 18:00:31.560145 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 18:00:31.560267 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 18:00:31.564428 systemd[1]: Stopped target network.target - Network. May 14 18:00:31.565279 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 18:00:31.565343 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 18:00:31.566649 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 18:00:31.566695 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 18:00:31.568404 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 18:00:31.568454 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 18:00:31.570249 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 18:00:31.570299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 18:00:31.572130 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 14 18:00:31.573890 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 18:00:31.584102 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 18:00:31.584216 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 18:00:31.587711 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 18:00:31.587966 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 18:00:31.588216 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 18:00:31.591558 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 18:00:31.592057 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 18:00:31.593994 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 18:00:31.594031 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 18:00:31.597351 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 18:00:31.598710 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 18:00:31.598779 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:00:31.600807 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:00:31.600850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:00:31.610890 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 18:00:31.610952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 18:00:31.612835 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 18:00:31.612887 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:00:31.615872 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 14 18:00:31.617916 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:00:31.617986 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:00:31.618270 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:00:31.620489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:00:31.623169 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:00:31.623254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:00:31.632957 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:00:31.634509 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:00:31.636167 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:00:31.636272 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:00:31.638460 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:00:31.638528 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:00:31.639780 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:00:31.639813 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:00:31.641443 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:00:31.641496 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:00:31.644231 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:00:31.644288 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:00:31.647012 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:00:31.647070 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:00:31.650697 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:00:31.651751 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:00:31.651809 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:00:31.654682 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:00:31.654733 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:00:31.658234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:00:31.658277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:00:31.662888 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 18:00:31.662945 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 18:00:31.662982 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:00:31.670397 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:00:31.671617 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:00:31.673010 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:00:31.675560 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:00:31.704935 systemd[1]: Switching root.
May 14 18:00:31.734564 systemd-journald[243]: Journal stopped
May 14 18:00:32.468802 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
May 14 18:00:32.468856 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:00:32.468868 kernel: SELinux: policy capability open_perms=1
May 14 18:00:32.468877 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:00:32.468889 kernel: SELinux: policy capability always_check_network=0
May 14 18:00:32.468901 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:00:32.468914 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:00:32.468923 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:00:32.468932 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:00:32.468941 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:00:32.468949 kernel: audit: type=1403 audit(1747245631.888:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:00:32.468963 systemd[1]: Successfully loaded SELinux policy in 51.626ms.
May 14 18:00:32.468982 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.105ms.
May 14 18:00:32.468994 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:00:32.469005 systemd[1]: Detected virtualization kvm.
May 14 18:00:32.469016 systemd[1]: Detected architecture arm64.
May 14 18:00:32.469025 systemd[1]: Detected first boot.
May 14 18:00:32.469035 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:00:32.469045 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:00:32.469054 zram_generator::config[1084]: No configuration found.
May 14 18:00:32.469065 systemd[1]: Populated /etc with preset unit settings.
May 14 18:00:32.469078 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:00:32.469088 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:00:32.469099 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:00:32.469109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:00:32.469119 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:00:32.469129 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:00:32.469139 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:00:32.469149 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:00:32.469159 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:00:32.469169 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:00:32.469179 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:00:32.469190 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:00:32.469200 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:00:32.469210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:00:32.469223 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:00:32.469234 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:00:32.469244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:00:32.469254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:00:32.469264 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 18:00:32.469276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:00:32.469286 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:00:32.469297 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:00:32.469307 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:00:32.469317 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:00:32.469328 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:00:32.469342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:00:32.469355 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:00:32.469366 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:00:32.469386 systemd[1]: Reached target swap.target - Swaps.
May 14 18:00:32.469397 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:00:32.469407 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:00:32.469417 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:00:32.469427 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:00:32.469437 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:00:32.469447 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:00:32.469457 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:00:32.469467 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:00:32.469479 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:00:32.469489 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:00:32.469500 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:00:32.469510 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:00:32.469520 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:00:32.469531 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:00:32.469541 systemd[1]: Reached target machines.target - Containers.
May 14 18:00:32.469551 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:00:32.469563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:00:32.469573 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:00:32.469583 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:00:32.469594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:00:32.469603 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:00:32.469614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:00:32.469624 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:00:32.469634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:00:32.469644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:00:32.469657 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:00:32.469667 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:00:32.469677 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:00:32.469687 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:00:32.469698 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:00:32.469715 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:00:32.469727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:00:32.469737 kernel: loop: module loaded
May 14 18:00:32.469749 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:00:32.469759 kernel: fuse: init (API version 7.41)
May 14 18:00:32.469768 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:00:32.469778 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:00:32.469790 kernel: ACPI: bus type drm_connector registered
May 14 18:00:32.469800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:00:32.469811 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:00:32.469821 systemd[1]: Stopped verity-setup.service.
May 14 18:00:32.469831 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:00:32.469841 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:00:32.469851 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:00:32.469861 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:00:32.469871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:00:32.469881 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:00:32.469920 systemd-journald[1160]: Collecting audit messages is disabled.
May 14 18:00:32.469944 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:00:32.469955 systemd-journald[1160]: Journal started
May 14 18:00:32.469977 systemd-journald[1160]: Runtime Journal (/run/log/journal/c5e3064ef98f40999dca5444f87b514e) is 6M, max 48.5M, 42.4M free.
May 14 18:00:32.470012 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:00:32.259153 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:00:32.278402 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 18:00:32.278804 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:00:32.473946 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:00:32.474821 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:00:32.475001 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:00:32.476530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:00:32.476702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:00:32.478134 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:00:32.478312 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:00:32.479980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:00:32.480152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:00:32.481803 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:00:32.481986 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:00:32.483497 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:00:32.483661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:00:32.485063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:00:32.486594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:00:32.488169 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:00:32.489770 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:00:32.502617 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:00:32.505286 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:00:32.507612 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:00:32.508820 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:00:32.508864 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:00:32.510880 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:00:32.520301 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:00:32.521570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:00:32.525557 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:00:32.527742 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:00:32.528904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:00:32.530022 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:00:32.531180 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:00:32.535529 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:00:32.538495 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:00:32.541014 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:00:32.541428 systemd-journald[1160]: Time spent on flushing to /var/log/journal/c5e3064ef98f40999dca5444f87b514e is 17.753ms for 887 entries.
May 14 18:00:32.541428 systemd-journald[1160]: System Journal (/var/log/journal/c5e3064ef98f40999dca5444f87b514e) is 8M, max 195.6M, 187.6M free.
May 14 18:00:32.563570 systemd-journald[1160]: Received client request to flush runtime journal.
May 14 18:00:32.563603 kernel: loop0: detected capacity change from 0 to 107312
May 14 18:00:32.550543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:00:32.553812 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:00:32.555191 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:00:32.556909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:00:32.560966 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:00:32.564912 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:00:32.568975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:00:32.577475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:00:32.583437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:00:32.603538 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:00:32.607503 kernel: loop1: detected capacity change from 0 to 189592
May 14 18:00:32.607751 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:00:32.610327 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:00:32.631539 kernel: loop2: detected capacity change from 0 to 138376
May 14 18:00:32.636458 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 14 18:00:32.636470 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 14 18:00:32.640727 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:00:32.672397 kernel: loop3: detected capacity change from 0 to 107312
May 14 18:00:32.678412 kernel: loop4: detected capacity change from 0 to 189592
May 14 18:00:32.684414 kernel: loop5: detected capacity change from 0 to 138376
May 14 18:00:32.689904 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 18:00:32.690296 (sd-merge)[1225]: Merged extensions into '/usr'.
May 14 18:00:32.695910 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:00:32.695937 systemd[1]: Reloading...
May 14 18:00:32.750386 zram_generator::config[1253]: No configuration found.
May 14 18:00:32.818261 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:00:32.846090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:00:32.908683 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:00:32.908826 systemd[1]: Reloading finished in 212 ms.
May 14 18:00:32.930051 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:00:32.932464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:00:32.946837 systemd[1]: Starting ensure-sysext.service...
May 14 18:00:32.948815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:00:32.959364 systemd[1]: Reload requested from client PID 1287 ('systemctl') (unit ensure-sysext.service)...
May 14 18:00:32.959406 systemd[1]: Reloading...
May 14 18:00:32.970029 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:00:32.970070 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:00:32.970310 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:00:32.970950 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:00:32.971770 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:00:32.972123 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
May 14 18:00:32.972236 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
May 14 18:00:32.975022 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:00:32.975521 systemd-tmpfiles[1288]: Skipping /boot
May 14 18:00:32.985184 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:00:32.985312 systemd-tmpfiles[1288]: Skipping /boot
May 14 18:00:33.001420 zram_generator::config[1315]: No configuration found.
May 14 18:00:33.079156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:00:33.142737 systemd[1]: Reloading finished in 183 ms.
May 14 18:00:33.162072 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:00:33.168148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:00:33.181502 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:00:33.184067 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:00:33.186523 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:00:33.193732 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:00:33.196529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:00:33.209114 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:00:33.220121 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:00:33.232734 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:00:33.234669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:00:33.236713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:00:33.240774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:00:33.244698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:00:33.245993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:00:33.246120 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:00:33.246756 systemd-udevd[1361]: Using default interface naming scheme 'v255'.
May 14 18:00:33.253484 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:00:33.256427 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:00:33.257494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:00:33.259085 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:00:33.261184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:00:33.261363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:00:33.263062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:00:33.263212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:00:33.266910 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:00:33.267067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:00:33.270239 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:00:33.271945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:00:33.286406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:00:33.290904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:00:33.293478 augenrules[1411]: No rules
May 14 18:00:33.294664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:00:33.309422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:00:33.310622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:00:33.310766 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:00:33.315665 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:00:33.318448 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:00:33.319356 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:00:33.320954 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:00:33.322412 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:00:33.332025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:00:33.334149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:00:33.337024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:00:33.338427 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:00:33.352654 systemd[1]: Finished ensure-sysext.service.
May 14 18:00:33.368078 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:00:33.368277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:00:33.372269 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 18:00:33.380675 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:00:33.381918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:00:33.384147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:00:33.385451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:00:33.385491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:00:33.385529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:00:33.385575 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:00:33.388590 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:00:33.389845 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:00:33.397530 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:00:33.399424 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:00:33.414206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:00:33.420757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:00:33.428961 augenrules[1439]: /sbin/augenrules: No change
May 14 18:00:33.448854 augenrules[1463]: No rules
May 14 18:00:33.453993 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:00:33.454220 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:00:33.464176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:00:33.477536 systemd-resolved[1355]: Positive Trust Anchors:
May 14 18:00:33.477552 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:00:33.477587 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:00:33.490857 systemd-resolved[1355]: Defaulting to hostname 'linux'.
May 14 18:00:33.496120 systemd-networkd[1423]: lo: Link UP
May 14 18:00:33.496128 systemd-networkd[1423]: lo: Gained carrier
May 14 18:00:33.496988 systemd-networkd[1423]: Enumeration completed
May 14 18:00:33.497096 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:00:33.497534 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:00:33.497544 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:00:33.498031 systemd-networkd[1423]: eth0: Link UP
May 14 18:00:33.499597 systemd-networkd[1423]: eth0: Gained carrier
May 14 18:00:33.499617 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:00:33.499913 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:00:33.503449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:00:33.504659 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:00:33.505822 systemd[1]: Reached target network.target - Network. May 14 18:00:33.508500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 18:00:33.533449 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 18:00:33.536713 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 18:00:33.545659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:00:33.559915 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 18:00:33.561547 systemd[1]: Reached target time-set.target - System Time Set. May 14 18:00:33.123772 systemd-resolved[1355]: Clock change detected. Flushing caches. May 14 18:00:33.127492 systemd-journald[1160]: Time jumped backwards, rotating. May 14 18:00:33.123790 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 18:00:33.123836 systemd-timesyncd[1442]: Initial clock synchronization to Wed 2025-05-14 18:00:33.123709 UTC. May 14 18:00:33.160457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:00:33.161907 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:00:33.163159 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 18:00:33.164471 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 18:00:33.165901 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 18:00:33.167139 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 18:00:33.168624 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 14 18:00:33.169883 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 18:00:33.169918 systemd[1]: Reached target paths.target - Path Units. May 14 18:00:33.170864 systemd[1]: Reached target timers.target - Timer Units. May 14 18:00:33.172810 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 18:00:33.175107 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 18:00:33.178429 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 18:00:33.179868 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 18:00:33.181175 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 18:00:33.187064 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 18:00:33.188714 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 18:00:33.190473 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 18:00:33.191774 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:00:33.192782 systemd[1]: Reached target basic.target - Basic System. May 14 18:00:33.193794 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 18:00:33.193823 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 18:00:33.194746 systemd[1]: Starting containerd.service - containerd container runtime... May 14 18:00:33.196749 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 18:00:33.198606 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 18:00:33.200610 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
May 14 18:00:33.202554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 18:00:33.203661 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 18:00:33.205380 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 18:00:33.209302 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 18:00:33.211406 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 18:00:33.212863 jq[1502]: false May 14 18:00:33.214371 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 18:00:33.227317 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 18:00:33.229331 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 18:00:33.229756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 18:00:33.231761 systemd[1]: Starting update-engine.service - Update Engine... 
May 14 18:00:33.233150 extend-filesystems[1503]: Found loop3 May 14 18:00:33.236329 extend-filesystems[1503]: Found loop4 May 14 18:00:33.236329 extend-filesystems[1503]: Found loop5 May 14 18:00:33.236329 extend-filesystems[1503]: Found vda May 14 18:00:33.236329 extend-filesystems[1503]: Found vda1 May 14 18:00:33.236329 extend-filesystems[1503]: Found vda2 May 14 18:00:33.236329 extend-filesystems[1503]: Found vda3 May 14 18:00:33.236329 extend-filesystems[1503]: Found usr May 14 18:00:33.236329 extend-filesystems[1503]: Found vda4 May 14 18:00:33.236329 extend-filesystems[1503]: Found vda6 May 14 18:00:33.236329 extend-filesystems[1503]: Found vda7 May 14 18:00:33.236329 extend-filesystems[1503]: Found vda9 May 14 18:00:33.236329 extend-filesystems[1503]: Checking size of /dev/vda9 May 14 18:00:33.234045 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 18:00:33.259395 extend-filesystems[1503]: Resized partition /dev/vda9 May 14 18:00:33.237962 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 18:00:33.260359 jq[1519]: true May 14 18:00:33.241567 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 18:00:33.241744 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 18:00:33.241971 systemd[1]: motdgen.service: Deactivated successfully. May 14 18:00:33.242113 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 18:00:33.247169 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 18:00:33.247524 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 14 18:00:33.263749 extend-filesystems[1532]: resize2fs 1.47.2 (1-Jan-2025) May 14 18:00:33.268206 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 18:00:33.269168 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 18:00:33.278479 jq[1524]: true May 14 18:00:33.294233 tar[1523]: linux-arm64/helm May 14 18:00:33.303605 systemd-logind[1513]: Watching system buttons on /dev/input/event0 (Power Button) May 14 18:00:33.304251 systemd-logind[1513]: New seat seat0. May 14 18:00:33.305822 systemd[1]: Started systemd-logind.service - User Login Management. May 14 18:00:33.318205 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 18:00:33.318906 dbus-daemon[1500]: [system] SELinux support is enabled May 14 18:00:33.325449 dbus-daemon[1500]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 18:00:33.341888 update_engine[1517]: I20250514 18:00:33.319782 1517 main.cc:92] Flatcar Update Engine starting May 14 18:00:33.341888 update_engine[1517]: I20250514 18:00:33.328123 1517 update_check_scheduler.cc:74] Next update check in 12m0s May 14 18:00:33.319238 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 18:00:33.324524 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 18:00:33.324770 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 18:00:33.326381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 18:00:33.326397 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 14 18:00:33.328509 systemd[1]: Started update-engine.service - Update Engine. May 14 18:00:33.333394 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 18:00:33.343273 extend-filesystems[1532]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 18:00:33.343273 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 18:00:33.343273 extend-filesystems[1532]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 18:00:33.362928 extend-filesystems[1503]: Resized filesystem in /dev/vda9 May 14 18:00:33.363831 bash[1556]: Updated "/home/core/.ssh/authorized_keys" May 14 18:00:33.346460 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:00:33.346684 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 18:00:33.350865 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:00:33.360919 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 14 18:00:33.411355 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:00:33.479209 containerd[1527]: time="2025-05-14T18:00:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:00:33.482167 containerd[1527]: time="2025-05-14T18:00:33.482132477Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494556557Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.88µs" May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494605437Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494625397Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494809637Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494827037Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494852037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494908877Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.494920997Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.495158117Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.495173597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.495212077Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:00:33.495301 containerd[1527]: time="2025-05-14T18:00:33.495221677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:00:33.495584 containerd[1527]: time="2025-05-14T18:00:33.495308437Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:00:33.495584 containerd[1527]: time="2025-05-14T18:00:33.495526037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:00:33.495584 containerd[1527]: time="2025-05-14T18:00:33.495560157Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:00:33.495584 containerd[1527]: time="2025-05-14T18:00:33.495570997Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:00:33.495654 containerd[1527]: time="2025-05-14T18:00:33.495605877Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:00:33.495838 containerd[1527]: time="2025-05-14T18:00:33.495810197Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:00:33.495895 containerd[1527]: time="2025-05-14T18:00:33.495877597Z" level=info msg="metadata content store policy set" policy=shared May 14 18:00:33.499258 containerd[1527]: time="2025-05-14T18:00:33.499223837Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:00:33.499333 containerd[1527]: time="2025-05-14T18:00:33.499279037Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:00:33.499333 containerd[1527]: time="2025-05-14T18:00:33.499303157Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:00:33.499333 containerd[1527]: time="2025-05-14T18:00:33.499315557Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:00:33.499333 containerd[1527]: time="2025-05-14T18:00:33.499328677Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499342557Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499355317Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499367877Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499379957Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499391317Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499401997Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:00:33.499426 containerd[1527]: time="2025-05-14T18:00:33.499415197Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499557037Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499589517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499609037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499619877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499630237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499641397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499654277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499665797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 
18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499677117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499688197Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:00:33.499807 containerd[1527]: time="2025-05-14T18:00:33.499697837Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:00:33.500010 containerd[1527]: time="2025-05-14T18:00:33.499921677Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:00:33.500010 containerd[1527]: time="2025-05-14T18:00:33.499940717Z" level=info msg="Start snapshots syncer" May 14 18:00:33.500010 containerd[1527]: time="2025-05-14T18:00:33.499967477Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:00:33.500271 containerd[1527]: time="2025-05-14T18:00:33.500231077Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:00:33.500381 containerd[1527]: time="2025-05-14T18:00:33.500295237Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:00:33.500405 containerd[1527]: time="2025-05-14T18:00:33.500381597Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:00:33.500535 containerd[1527]: time="2025-05-14T18:00:33.500503837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:00:33.500617 containerd[1527]: time="2025-05-14T18:00:33.500552197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:00:33.500617 containerd[1527]: time="2025-05-14T18:00:33.500567837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:00:33.500617 containerd[1527]: time="2025-05-14T18:00:33.500580277Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:00:33.500617 containerd[1527]: time="2025-05-14T18:00:33.500592877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:00:33.500617 containerd[1527]: time="2025-05-14T18:00:33.500603917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:00:33.500617 containerd[1527]: time="2025-05-14T18:00:33.500614077Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500642277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500654317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500664917Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500705317Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500720957Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500730757Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500742077Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500750237Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500763317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500774597Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500855037Z" level=info msg="runtime interface created" May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500860917Z" level=info msg="created NRI interface" May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500871797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:00:33.500846 containerd[1527]: time="2025-05-14T18:00:33.500883997Z" level=info msg="Connect containerd service" May 14 18:00:33.501265 containerd[1527]: time="2025-05-14T18:00:33.500911797Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:00:33.501778 
containerd[1527]: time="2025-05-14T18:00:33.501746717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:00:33.607338 containerd[1527]: time="2025-05-14T18:00:33.606971117Z" level=info msg="Start subscribing containerd event" May 14 18:00:33.607466 containerd[1527]: time="2025-05-14T18:00:33.607385917Z" level=info msg="Start recovering state" May 14 18:00:33.608235 containerd[1527]: time="2025-05-14T18:00:33.607503157Z" level=info msg="Start event monitor" May 14 18:00:33.608235 containerd[1527]: time="2025-05-14T18:00:33.607536797Z" level=info msg="Start cni network conf syncer for default" May 14 18:00:33.608235 containerd[1527]: time="2025-05-14T18:00:33.607554837Z" level=info msg="Start streaming server" May 14 18:00:33.608235 containerd[1527]: time="2025-05-14T18:00:33.607646877Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:00:33.608235 containerd[1527]: time="2025-05-14T18:00:33.607706077Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:00:33.608508 containerd[1527]: time="2025-05-14T18:00:33.608402437Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:00:33.609257 containerd[1527]: time="2025-05-14T18:00:33.609222997Z" level=info msg="runtime interface starting up..." May 14 18:00:33.609338 containerd[1527]: time="2025-05-14T18:00:33.609324917Z" level=info msg="starting plugins..." May 14 18:00:33.609408 containerd[1527]: time="2025-05-14T18:00:33.609397597Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:00:33.609683 containerd[1527]: time="2025-05-14T18:00:33.609666717Z" level=info msg="containerd successfully booted in 0.130985s" May 14 18:00:33.609772 systemd[1]: Started containerd.service - containerd container runtime. 
May 14 18:00:33.670996 tar[1523]: linux-arm64/LICENSE May 14 18:00:33.671226 tar[1523]: linux-arm64/README.md May 14 18:00:33.685670 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:00:34.223393 systemd-networkd[1423]: eth0: Gained IPv6LL May 14 18:00:34.225862 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:00:34.227680 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:00:34.230958 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 18:00:34.233530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:34.242599 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:00:34.265237 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 18:00:34.267242 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 18:00:34.269079 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:00:34.272096 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:00:34.328762 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:00:34.349150 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:00:34.353681 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:00:34.376809 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:00:34.377052 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:00:34.379877 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:00:34.400340 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:00:34.403285 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:00:34.405640 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
May 14 18:00:34.407133 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:00:34.720282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:34.721901 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:00:34.724005 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:00:34.726481 systemd[1]: Startup finished in 2.119s (kernel) + 5.261s (initrd) + 3.329s (userspace) = 10.710s. May 14 18:00:35.170900 kubelet[1630]: E0514 18:00:35.170785 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:00:35.173094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:00:35.173249 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:00:35.173612 systemd[1]: kubelet.service: Consumed 812ms CPU time, 232.1M memory peak. May 14 18:00:39.583805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:00:39.585093 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:48682.service - OpenSSH per-connection server daemon (10.0.0.1:48682). May 14 18:00:39.665080 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 48682 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:39.667020 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:39.673171 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:00:39.674168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 14 18:00:39.679464 systemd-logind[1513]: New session 1 of user core. May 14 18:00:39.693309 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:00:39.696031 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:00:39.715536 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:00:39.717766 systemd-logind[1513]: New session c1 of user core. May 14 18:00:39.819902 systemd[1647]: Queued start job for default target default.target. May 14 18:00:39.831140 systemd[1647]: Created slice app.slice - User Application Slice. May 14 18:00:39.831172 systemd[1647]: Reached target paths.target - Paths. May 14 18:00:39.831233 systemd[1647]: Reached target timers.target - Timers. May 14 18:00:39.832521 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:00:39.842264 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:00:39.842324 systemd[1647]: Reached target sockets.target - Sockets. May 14 18:00:39.842366 systemd[1647]: Reached target basic.target - Basic System. May 14 18:00:39.842396 systemd[1647]: Reached target default.target - Main User Target. May 14 18:00:39.842424 systemd[1647]: Startup finished in 119ms. May 14 18:00:39.842706 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:00:39.844405 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:00:39.906526 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:48684.service - OpenSSH per-connection server daemon (10.0.0.1:48684). May 14 18:00:39.958924 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 48684 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:39.960217 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:39.964252 systemd-logind[1513]: New session 2 of user core. 
May 14 18:00:39.979348 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:00:40.030004 sshd[1660]: Connection closed by 10.0.0.1 port 48684 May 14 18:00:40.030437 sshd-session[1658]: pam_unix(sshd:session): session closed for user core May 14 18:00:40.042333 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:48684.service: Deactivated successfully. May 14 18:00:40.044521 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:00:40.046591 systemd-logind[1513]: Session 2 logged out. Waiting for processes to exit. May 14 18:00:40.048349 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:48688.service - OpenSSH per-connection server daemon (10.0.0.1:48688). May 14 18:00:40.049439 systemd-logind[1513]: Removed session 2. May 14 18:00:40.096355 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:40.097558 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:40.102246 systemd-logind[1513]: New session 3 of user core. May 14 18:00:40.109338 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:00:40.158535 sshd[1668]: Connection closed by 10.0.0.1 port 48688 May 14 18:00:40.159061 sshd-session[1666]: pam_unix(sshd:session): session closed for user core May 14 18:00:40.173674 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:48688.service: Deactivated successfully. May 14 18:00:40.176808 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:00:40.177508 systemd-logind[1513]: Session 3 logged out. Waiting for processes to exit. May 14 18:00:40.180766 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:48692.service - OpenSSH per-connection server daemon (10.0.0.1:48692). May 14 18:00:40.181259 systemd-logind[1513]: Removed session 3. 
May 14 18:00:40.228309 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 48692 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:40.229532 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:40.233900 systemd-logind[1513]: New session 4 of user core. May 14 18:00:40.251346 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:00:40.301513 sshd[1676]: Connection closed by 10.0.0.1 port 48692 May 14 18:00:40.301723 sshd-session[1674]: pam_unix(sshd:session): session closed for user core May 14 18:00:40.312247 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:48692.service: Deactivated successfully. May 14 18:00:40.315376 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:00:40.316842 systemd-logind[1513]: Session 4 logged out. Waiting for processes to exit. May 14 18:00:40.318153 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:48702.service - OpenSSH per-connection server daemon (10.0.0.1:48702). May 14 18:00:40.318955 systemd-logind[1513]: Removed session 4. May 14 18:00:40.374707 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 48702 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:40.375990 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:40.380740 systemd-logind[1513]: New session 5 of user core. May 14 18:00:40.389338 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 18:00:40.450731 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:00:40.450987 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:40.465866 sudo[1685]: pam_unix(sudo:session): session closed for user root May 14 18:00:40.467283 sshd[1684]: Connection closed by 10.0.0.1 port 48702 May 14 18:00:40.467699 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 14 18:00:40.478070 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:48702.service: Deactivated successfully. May 14 18:00:40.479381 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:00:40.480873 systemd-logind[1513]: Session 5 logged out. Waiting for processes to exit. May 14 18:00:40.482126 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:48712.service - OpenSSH per-connection server daemon (10.0.0.1:48712). May 14 18:00:40.482867 systemd-logind[1513]: Removed session 5. May 14 18:00:40.533930 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 48712 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:40.535028 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:40.538695 systemd-logind[1513]: New session 6 of user core. May 14 18:00:40.550326 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 18:00:40.599528 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:00:40.599788 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:40.657049 sudo[1695]: pam_unix(sudo:session): session closed for user root May 14 18:00:40.661932 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:00:40.662228 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:40.670882 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:00:40.710327 augenrules[1717]: No rules May 14 18:00:40.711312 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:00:40.711538 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:00:40.714285 sudo[1694]: pam_unix(sudo:session): session closed for user root May 14 18:00:40.715741 sshd[1693]: Connection closed by 10.0.0.1 port 48712 May 14 18:00:40.715624 sshd-session[1691]: pam_unix(sshd:session): session closed for user core May 14 18:00:40.726110 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:48712.service: Deactivated successfully. May 14 18:00:40.727517 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:00:40.730361 systemd-logind[1513]: Session 6 logged out. Waiting for processes to exit. May 14 18:00:40.732423 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:48724.service - OpenSSH per-connection server daemon (10.0.0.1:48724). May 14 18:00:40.733008 systemd-logind[1513]: Removed session 6. May 14 18:00:40.785917 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 48724 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:00:40.786994 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:40.791244 systemd-logind[1513]: New session 7 of user core. 
May 14 18:00:40.797314 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:00:40.846328 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:00:40.846599 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:00:41.201028 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:00:41.231549 (dockerd)[1749]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:00:41.493928 dockerd[1749]: time="2025-05-14T18:00:41.493809037Z" level=info msg="Starting up" May 14 18:00:41.495143 dockerd[1749]: time="2025-05-14T18:00:41.495106677Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:00:41.529682 dockerd[1749]: time="2025-05-14T18:00:41.529492917Z" level=info msg="Loading containers: start." May 14 18:00:41.540879 kernel: Initializing XFRM netlink socket May 14 18:00:41.729256 systemd-networkd[1423]: docker0: Link UP May 14 18:00:41.732893 dockerd[1749]: time="2025-05-14T18:00:41.732852597Z" level=info msg="Loading containers: done." 
May 14 18:00:41.745978 dockerd[1749]: time="2025-05-14T18:00:41.745882957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:00:41.745978 dockerd[1749]: time="2025-05-14T18:00:41.745958637Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:00:41.746085 dockerd[1749]: time="2025-05-14T18:00:41.746054597Z" level=info msg="Initializing buildkit" May 14 18:00:41.766100 dockerd[1749]: time="2025-05-14T18:00:41.766047797Z" level=info msg="Completed buildkit initialization" May 14 18:00:41.770881 dockerd[1749]: time="2025-05-14T18:00:41.770844237Z" level=info msg="Daemon has completed initialization" May 14 18:00:41.770951 dockerd[1749]: time="2025-05-14T18:00:41.770907357Z" level=info msg="API listen on /run/docker.sock" May 14 18:00:41.771076 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:00:42.558267 containerd[1527]: time="2025-05-14T18:00:42.558230837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 18:00:43.506876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752782114.mount: Deactivated successfully. 
May 14 18:00:44.541044 containerd[1527]: time="2025-05-14T18:00:44.540983357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:44.541595 containerd[1527]: time="2025-05-14T18:00:44.541560117Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 14 18:00:44.542257 containerd[1527]: time="2025-05-14T18:00:44.542231437Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:44.544558 containerd[1527]: time="2025-05-14T18:00:44.544527037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:44.545892 containerd[1527]: time="2025-05-14T18:00:44.545776637Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.98750608s" May 14 18:00:44.545892 containerd[1527]: time="2025-05-14T18:00:44.545811557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 18:00:44.546579 containerd[1527]: time="2025-05-14T18:00:44.546547517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 18:00:45.423679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 14 18:00:45.425085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:45.541129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:45.544620 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:00:45.576481 kubelet[2018]: E0514 18:00:45.576409 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:00:45.581360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:00:45.581494 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:00:45.583252 systemd[1]: kubelet.service: Consumed 123ms CPU time, 94.5M memory peak. 
May 14 18:00:45.995211 containerd[1527]: time="2025-05-14T18:00:45.994949157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.996422 containerd[1527]: time="2025-05-14T18:00:45.996390157Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 14 18:00:45.998215 containerd[1527]: time="2025-05-14T18:00:45.997543837Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:46.000075 containerd[1527]: time="2025-05-14T18:00:46.000049237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:46.001336 containerd[1527]: time="2025-05-14T18:00:46.001285037Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.45470068s" May 14 18:00:46.001405 containerd[1527]: time="2025-05-14T18:00:46.001335237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 18:00:46.001783 containerd[1527]: time="2025-05-14T18:00:46.001756757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 18:00:47.358233 containerd[1527]: time="2025-05-14T18:00:47.358159037Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:47.358683 containerd[1527]: time="2025-05-14T18:00:47.358630277Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 14 18:00:47.359699 containerd[1527]: time="2025-05-14T18:00:47.359644557Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:47.361983 containerd[1527]: time="2025-05-14T18:00:47.361928277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:47.363007 containerd[1527]: time="2025-05-14T18:00:47.362970757Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.36118112s" May 14 18:00:47.363123 containerd[1527]: time="2025-05-14T18:00:47.363008397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 18:00:47.363573 containerd[1527]: time="2025-05-14T18:00:47.363545277Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 18:00:48.467477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659381858.mount: Deactivated successfully. 
May 14 18:00:48.667307 containerd[1527]: time="2025-05-14T18:00:48.667247997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.667729 containerd[1527]: time="2025-05-14T18:00:48.667691677Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 14 18:00:48.668460 containerd[1527]: time="2025-05-14T18:00:48.668428117Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.670165 containerd[1527]: time="2025-05-14T18:00:48.670116157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.670770 containerd[1527]: time="2025-05-14T18:00:48.670613837Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.3070328s" May 14 18:00:48.670770 containerd[1527]: time="2025-05-14T18:00:48.670648517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 18:00:48.671330 containerd[1527]: time="2025-05-14T18:00:48.671291877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:00:49.347368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178199910.mount: Deactivated successfully. 
May 14 18:00:49.978974 containerd[1527]: time="2025-05-14T18:00:49.978924397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:49.980423 containerd[1527]: time="2025-05-14T18:00:49.980386317Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 18:00:49.981590 containerd[1527]: time="2025-05-14T18:00:49.981246797Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:49.984279 containerd[1527]: time="2025-05-14T18:00:49.984242437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:49.985456 containerd[1527]: time="2025-05-14T18:00:49.985412237Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.31408848s" May 14 18:00:49.985456 containerd[1527]: time="2025-05-14T18:00:49.985454677Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 18:00:49.986142 containerd[1527]: time="2025-05-14T18:00:49.985906917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 18:00:50.666156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086937608.mount: Deactivated successfully. 
May 14 18:00:50.671742 containerd[1527]: time="2025-05-14T18:00:50.671706357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:50.672536 containerd[1527]: time="2025-05-14T18:00:50.672506877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 18:00:50.673609 containerd[1527]: time="2025-05-14T18:00:50.673559477Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:50.675891 containerd[1527]: time="2025-05-14T18:00:50.675851357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:00:50.676387 containerd[1527]: time="2025-05-14T18:00:50.676265397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 690.32436ms" May 14 18:00:50.676387 containerd[1527]: time="2025-05-14T18:00:50.676293117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 18:00:50.676935 containerd[1527]: time="2025-05-14T18:00:50.676680797Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 18:00:51.335588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118455499.mount: Deactivated 
successfully. May 14 18:00:52.987214 containerd[1527]: time="2025-05-14T18:00:52.987012397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:52.988554 containerd[1527]: time="2025-05-14T18:00:52.988528877Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 14 18:00:52.989501 containerd[1527]: time="2025-05-14T18:00:52.989466477Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:52.992845 containerd[1527]: time="2025-05-14T18:00:52.992806717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:52.994592 containerd[1527]: time="2025-05-14T18:00:52.994554397Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.31783492s" May 14 18:00:52.994630 containerd[1527]: time="2025-05-14T18:00:52.994594917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 18:00:55.831898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 18:00:55.833370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:55.962501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:00:55.966585 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:00:55.999766 kubelet[2175]: E0514 18:00:55.999724 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:00:56.002152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:00:56.002302 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:00:56.002587 systemd[1]: kubelet.service: Consumed 122ms CPU time, 94.7M memory peak. May 14 18:00:56.194199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:56.194335 systemd[1]: kubelet.service: Consumed 122ms CPU time, 94.7M memory peak. May 14 18:00:56.196266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:56.218003 systemd[1]: Reload requested from client PID 2192 ('systemctl') (unit session-7.scope)... May 14 18:00:56.218020 systemd[1]: Reloading... May 14 18:00:56.289214 zram_generator::config[2238]: No configuration found. May 14 18:00:56.419480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:00:56.501871 systemd[1]: Reloading finished in 283 ms. May 14 18:00:56.566669 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:00:56.566746 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:00:56.568223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:00:56.568275 systemd[1]: kubelet.service: Consumed 78ms CPU time, 82.5M memory peak. May 14 18:00:56.569920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:00:56.670011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:00:56.687537 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:00:56.721511 kubelet[2280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:00:56.721511 kubelet[2280]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:00:56.721511 kubelet[2280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 18:00:56.721842 kubelet[2280]: I0514 18:00:56.721733 2280 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:00:57.291751 kubelet[2280]: I0514 18:00:57.291214 2280 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:00:57.291751 kubelet[2280]: I0514 18:00:57.291247 2280 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:00:57.291751 kubelet[2280]: I0514 18:00:57.291623 2280 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:00:57.331611 kubelet[2280]: E0514 18:00:57.331552 2280 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" May 14 18:00:57.331943 kubelet[2280]: I0514 18:00:57.331918 2280 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:00:57.339526 kubelet[2280]: I0514 18:00:57.339506 2280 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:00:57.343117 kubelet[2280]: I0514 18:00:57.343088 2280 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:00:57.344002 kubelet[2280]: I0514 18:00:57.343960 2280 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:00:57.344155 kubelet[2280]: I0514 18:00:57.344112 2280 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:00:57.344332 kubelet[2280]: I0514 18:00:57.344146 2280 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 18:00:57.344495 kubelet[2280]: I0514 18:00:57.344464 2280 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:00:57.344495 kubelet[2280]: I0514 18:00:57.344477 2280 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:00:57.344668 kubelet[2280]: I0514 18:00:57.344649 2280 state_mem.go:36] "Initialized new in-memory state store" May 14 18:00:57.349769 kubelet[2280]: I0514 18:00:57.349744 2280 kubelet.go:408] "Attempting to sync node with API server" May 14 18:00:57.349829 kubelet[2280]: I0514 18:00:57.349775 2280 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:00:57.349829 kubelet[2280]: I0514 18:00:57.349800 2280 kubelet.go:314] "Adding apiserver pod source" May 14 18:00:57.349829 kubelet[2280]: I0514 18:00:57.349809 2280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:00:57.352096 kubelet[2280]: I0514 18:00:57.351826 2280 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:00:57.352796 kubelet[2280]: W0514 18:00:57.352597 2280 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused May 14 18:00:57.352796 kubelet[2280]: E0514 18:00:57.352658 2280 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" May 14 18:00:57.354074 kubelet[2280]: I0514 18:00:57.354038 2280 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 18:00:57.355368 kubelet[2280]: W0514 18:00:57.355325 2280 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 18:00:57.356210 kubelet[2280]: W0514 18:00:57.352734 2280 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused May 14 18:00:57.356210 kubelet[2280]: E0514 18:00:57.355483 2280 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" May 14 18:00:57.356210 kubelet[2280]: I0514 18:00:57.356030 2280 server.go:1269] "Started kubelet" May 14 18:00:57.357823 kubelet[2280]: I0514 18:00:57.357786 2280 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:00:57.359771 kubelet[2280]: I0514 18:00:57.359739 2280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:00:57.360076 kubelet[2280]: I0514 18:00:57.360045 2280 server.go:460] "Adding debug handlers to kubelet server" May 14 18:00:57.360220 kubelet[2280]: I0514 18:00:57.358453 2280 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:00:57.360508 kubelet[2280]: I0514 18:00:57.360488 2280 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:00:57.361331 kubelet[2280]: I0514 18:00:57.361306 2280 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:00:57.361432 kubelet[2280]: I0514 18:00:57.361402 
2280 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:00:57.361539 kubelet[2280]: I0514 18:00:57.361520 2280 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:00:57.361630 kubelet[2280]: I0514 18:00:57.361613 2280 reconciler.go:26] "Reconciler: start to sync state" May 14 18:00:57.362031 kubelet[2280]: W0514 18:00:57.361986 2280 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused May 14 18:00:57.362131 kubelet[2280]: E0514 18:00:57.362047 2280 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" May 14 18:00:57.362368 kubelet[2280]: E0514 18:00:57.361088 2280 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f76ab060c7745 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:00:57.356007237 +0000 UTC m=+0.665557521,LastTimestamp:2025-05-14 18:00:57.356007237 +0000 UTC m=+0.665557521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:00:57.362602 kubelet[2280]: I0514 18:00:57.362577 2280 factory.go:221] Registration of the systemd 
container factory successfully May 14 18:00:57.362687 kubelet[2280]: I0514 18:00:57.362667 2280 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:00:57.363224 kubelet[2280]: E0514 18:00:57.363179 2280 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:00:57.363343 kubelet[2280]: E0514 18:00:57.363318 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms" May 14 18:00:57.364051 kubelet[2280]: I0514 18:00:57.364028 2280 factory.go:221] Registration of the containerd container factory successfully May 14 18:00:57.375261 kubelet[2280]: I0514 18:00:57.375231 2280 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:00:57.375261 kubelet[2280]: I0514 18:00:57.375248 2280 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:00:57.375261 kubelet[2280]: I0514 18:00:57.375262 2280 state_mem.go:36] "Initialized new in-memory state store" May 14 18:00:57.375969 kubelet[2280]: I0514 18:00:57.375909 2280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:00:57.377229 kubelet[2280]: I0514 18:00:57.377199 2280 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:00:57.377229 kubelet[2280]: I0514 18:00:57.377222 2280 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:00:57.377318 kubelet[2280]: I0514 18:00:57.377240 2280 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:00:57.379060 kubelet[2280]: E0514 18:00:57.379024 2280 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:00:57.444348 kubelet[2280]: I0514 18:00:57.444301 2280 policy_none.go:49] "None policy: Start" May 14 18:00:57.445024 kubelet[2280]: I0514 18:00:57.445007 2280 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:00:57.445090 kubelet[2280]: I0514 18:00:57.445032 2280 state_mem.go:35] "Initializing new in-memory state store" May 14 18:00:57.445090 kubelet[2280]: W0514 18:00:57.445011 2280 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused May 14 18:00:57.445090 kubelet[2280]: E0514 18:00:57.445071 2280 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" May 14 18:00:57.453232 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:00:57.464318 kubelet[2280]: E0514 18:00:57.464285 2280 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:00:57.475012 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 14 18:00:57.478700 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 18:00:57.479277 kubelet[2280]: E0514 18:00:57.479258 2280 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:00:57.497234 kubelet[2280]: I0514 18:00:57.497214 2280 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:00:57.497569 kubelet[2280]: I0514 18:00:57.497550 2280 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:00:57.497678 kubelet[2280]: I0514 18:00:57.497573 2280 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:00:57.498203 kubelet[2280]: I0514 18:00:57.498148 2280 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:00:57.499065 kubelet[2280]: E0514 18:00:57.499041 2280 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 18:00:57.564080 kubelet[2280]: E0514 18:00:57.563977 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" May 14 18:00:57.599228 kubelet[2280]: I0514 18:00:57.599203 2280 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:00:57.599717 kubelet[2280]: E0514 18:00:57.599687 2280 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" May 14 18:00:57.688176 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container 
kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 18:00:57.707405 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 18:00:57.727815 systemd[1]: Created slice kubepods-burstable-podfe9d800e8156a66d6668136d896fa1ce.slice - libcontainer container kubepods-burstable-podfe9d800e8156a66d6668136d896fa1ce.slice. May 14 18:00:57.763567 kubelet[2280]: I0514 18:00:57.763524 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:00:57.763567 kubelet[2280]: I0514 18:00:57.763564 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe9d800e8156a66d6668136d896fa1ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe9d800e8156a66d6668136d896fa1ce\") " pod="kube-system/kube-apiserver-localhost" May 14 18:00:57.763873 kubelet[2280]: I0514 18:00:57.763584 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:57.763873 kubelet[2280]: I0514 18:00:57.763602 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 18:00:57.763873 kubelet[2280]: I0514 18:00:57.763617 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe9d800e8156a66d6668136d896fa1ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe9d800e8156a66d6668136d896fa1ce\") " pod="kube-system/kube-apiserver-localhost" May 14 18:00:57.763873 kubelet[2280]: I0514 18:00:57.763632 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe9d800e8156a66d6668136d896fa1ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fe9d800e8156a66d6668136d896fa1ce\") " pod="kube-system/kube-apiserver-localhost" May 14 18:00:57.763873 kubelet[2280]: I0514 18:00:57.763646 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:57.763973 kubelet[2280]: I0514 18:00:57.763659 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:57.763973 kubelet[2280]: I0514 18:00:57.763675 2280 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:00:57.801746 kubelet[2280]: I0514 18:00:57.801716 2280 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:00:57.802066 kubelet[2280]: E0514 18:00:57.802042 2280 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" May 14 18:00:57.965520 kubelet[2280]: E0514 18:00:57.965475 2280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" May 14 18:00:58.006954 containerd[1527]: time="2025-05-14T18:00:58.006900637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 18:00:58.026717 containerd[1527]: time="2025-05-14T18:00:58.026675957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 18:00:58.028147 containerd[1527]: time="2025-05-14T18:00:58.028111677Z" level=info msg="connecting to shim 30c25c7d58b9c2fc84fdba1503c5f723769b273771e05e9eac137b1ae4c1cbf2" address="unix:///run/containerd/s/5e1cb47a9c88346190d2070d9847bd53bfa0240f7df79285a21a8d1bce76742f" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:58.030396 containerd[1527]: time="2025-05-14T18:00:58.030356397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fe9d800e8156a66d6668136d896fa1ce,Namespace:kube-system,Attempt:0,}" May 14 18:00:58.057436 systemd[1]: Started cri-containerd-30c25c7d58b9c2fc84fdba1503c5f723769b273771e05e9eac137b1ae4c1cbf2.scope - libcontainer container 
30c25c7d58b9c2fc84fdba1503c5f723769b273771e05e9eac137b1ae4c1cbf2. May 14 18:00:58.062531 containerd[1527]: time="2025-05-14T18:00:58.062331637Z" level=info msg="connecting to shim 5e5ffeac5d563ab24f33bb7ba918ca13b134b654a2775a1ba709a116c1a771ec" address="unix:///run/containerd/s/c70d6e36af9c4c2942b2afa36af83456d65290bd4e91f5c725f210be26b08099" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:58.077052 containerd[1527]: time="2025-05-14T18:00:58.076927997Z" level=info msg="connecting to shim 9e2dd5dd41eb421c4eab01b48ae07c77f3c12f27ca4a9dca89bdf2014231ad3a" address="unix:///run/containerd/s/60c72a7af26c7bc06c61f93337c30f3e15278c07a0d0ad9b0f0a1d8e2b3221e5" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:58.094368 systemd[1]: Started cri-containerd-5e5ffeac5d563ab24f33bb7ba918ca13b134b654a2775a1ba709a116c1a771ec.scope - libcontainer container 5e5ffeac5d563ab24f33bb7ba918ca13b134b654a2775a1ba709a116c1a771ec. May 14 18:00:58.102495 systemd[1]: Started cri-containerd-9e2dd5dd41eb421c4eab01b48ae07c77f3c12f27ca4a9dca89bdf2014231ad3a.scope - libcontainer container 9e2dd5dd41eb421c4eab01b48ae07c77f3c12f27ca4a9dca89bdf2014231ad3a. 
May 14 18:00:58.115067 containerd[1527]: time="2025-05-14T18:00:58.115027837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"30c25c7d58b9c2fc84fdba1503c5f723769b273771e05e9eac137b1ae4c1cbf2\"" May 14 18:00:58.122054 containerd[1527]: time="2025-05-14T18:00:58.122021917Z" level=info msg="CreateContainer within sandbox \"30c25c7d58b9c2fc84fdba1503c5f723769b273771e05e9eac137b1ae4c1cbf2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:00:58.142490 containerd[1527]: time="2025-05-14T18:00:58.142452237Z" level=info msg="Container 59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:58.145995 containerd[1527]: time="2025-05-14T18:00:58.145953157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fe9d800e8156a66d6668136d896fa1ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e2dd5dd41eb421c4eab01b48ae07c77f3c12f27ca4a9dca89bdf2014231ad3a\"" May 14 18:00:58.149504 containerd[1527]: time="2025-05-14T18:00:58.149475677Z" level=info msg="CreateContainer within sandbox \"9e2dd5dd41eb421c4eab01b48ae07c77f3c12f27ca4a9dca89bdf2014231ad3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:00:58.150991 containerd[1527]: time="2025-05-14T18:00:58.150884997Z" level=info msg="CreateContainer within sandbox \"30c25c7d58b9c2fc84fdba1503c5f723769b273771e05e9eac137b1ae4c1cbf2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce\"" May 14 18:00:58.151425 containerd[1527]: time="2025-05-14T18:00:58.151392557Z" level=info msg="StartContainer for \"59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce\"" May 14 18:00:58.152443 containerd[1527]: 
time="2025-05-14T18:00:58.152416997Z" level=info msg="connecting to shim 59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce" address="unix:///run/containerd/s/5e1cb47a9c88346190d2070d9847bd53bfa0240f7df79285a21a8d1bce76742f" protocol=ttrpc version=3 May 14 18:00:58.154730 containerd[1527]: time="2025-05-14T18:00:58.154697997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e5ffeac5d563ab24f33bb7ba918ca13b134b654a2775a1ba709a116c1a771ec\"" May 14 18:00:58.157647 containerd[1527]: time="2025-05-14T18:00:58.157595397Z" level=info msg="CreateContainer within sandbox \"5e5ffeac5d563ab24f33bb7ba918ca13b134b654a2775a1ba709a116c1a771ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:00:58.164210 containerd[1527]: time="2025-05-14T18:00:58.164158037Z" level=info msg="Container f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:58.169865 containerd[1527]: time="2025-05-14T18:00:58.169808117Z" level=info msg="Container 84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:58.177340 systemd[1]: Started cri-containerd-59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce.scope - libcontainer container 59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce. 
May 14 18:00:58.177733 containerd[1527]: time="2025-05-14T18:00:58.177703277Z" level=info msg="CreateContainer within sandbox \"9e2dd5dd41eb421c4eab01b48ae07c77f3c12f27ca4a9dca89bdf2014231ad3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d\"" May 14 18:00:58.179063 containerd[1527]: time="2025-05-14T18:00:58.179032317Z" level=info msg="StartContainer for \"f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d\"" May 14 18:00:58.180113 containerd[1527]: time="2025-05-14T18:00:58.180076797Z" level=info msg="connecting to shim f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d" address="unix:///run/containerd/s/60c72a7af26c7bc06c61f93337c30f3e15278c07a0d0ad9b0f0a1d8e2b3221e5" protocol=ttrpc version=3 May 14 18:00:58.181616 containerd[1527]: time="2025-05-14T18:00:58.181551157Z" level=info msg="CreateContainer within sandbox \"5e5ffeac5d563ab24f33bb7ba918ca13b134b654a2775a1ba709a116c1a771ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496\"" May 14 18:00:58.182051 containerd[1527]: time="2025-05-14T18:00:58.182025877Z" level=info msg="StartContainer for \"84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496\"" May 14 18:00:58.183027 containerd[1527]: time="2025-05-14T18:00:58.182996997Z" level=info msg="connecting to shim 84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496" address="unix:///run/containerd/s/c70d6e36af9c4c2942b2afa36af83456d65290bd4e91f5c725f210be26b08099" protocol=ttrpc version=3 May 14 18:00:58.199344 systemd[1]: Started cri-containerd-f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d.scope - libcontainer container f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d. 
May 14 18:00:58.201996 systemd[1]: Started cri-containerd-84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496.scope - libcontainer container 84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496. May 14 18:00:58.208846 kubelet[2280]: I0514 18:00:58.208594 2280 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:00:58.208927 kubelet[2280]: E0514 18:00:58.208907 2280 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" May 14 18:00:58.226367 containerd[1527]: time="2025-05-14T18:00:58.226017677Z" level=info msg="StartContainer for \"59f7e250406c49a379f9f2f941fef18e78a5d87a6eb07cf3d839962c258902ce\" returns successfully" May 14 18:00:58.274314 containerd[1527]: time="2025-05-14T18:00:58.273511957Z" level=info msg="StartContainer for \"f18b211bdbda82265f89f78f22410055a143d8aecfde9cc13c771333bd1d3d8d\" returns successfully" May 14 18:00:58.409379 containerd[1527]: time="2025-05-14T18:00:58.409310757Z" level=info msg="StartContainer for \"84fb025d181c1c9732d954269adf14e024372eb907d96a9e0f159f2971941496\" returns successfully" May 14 18:00:58.439857 kubelet[2280]: W0514 18:00:58.439768 2280 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused May 14 18:00:58.439857 kubelet[2280]: E0514 18:00:58.439845 2280 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" May 14 18:00:59.011012 kubelet[2280]: I0514 
18:00:59.010979 2280 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:01:00.033467 kubelet[2280]: E0514 18:01:00.033411 2280 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 18:01:00.102047 kubelet[2280]: I0514 18:01:00.101755 2280 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 18:01:00.102047 kubelet[2280]: E0514 18:01:00.101796 2280 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 18:01:00.144839 kubelet[2280]: E0514 18:01:00.144726 2280 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f76ab060c7745 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:00:57.356007237 +0000 UTC m=+0.665557521,LastTimestamp:2025-05-14 18:00:57.356007237 +0000 UTC m=+0.665557521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:01:00.353529 kubelet[2280]: I0514 18:01:00.353120 2280 apiserver.go:52] "Watching apiserver" May 14 18:01:00.362323 kubelet[2280]: I0514 18:01:00.362278 2280 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:01:02.125033 systemd[1]: Reload requested from client PID 2560 ('systemctl') (unit session-7.scope)... May 14 18:01:02.125049 systemd[1]: Reloading... May 14 18:01:02.184411 zram_generator::config[2606]: No configuration found. 
May 14 18:01:02.246820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:01:02.343018 systemd[1]: Reloading finished in 217 ms. May 14 18:01:02.376243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:01:02.387117 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:01:02.387459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:01:02.387524 systemd[1]: kubelet.service: Consumed 1.071s CPU time, 115.8M memory peak. May 14 18:01:02.389321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:01:02.518678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:01:02.524054 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:01:02.568200 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:01:02.568200 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:01:02.568200 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 18:01:02.568520 kubelet[2645]: I0514 18:01:02.568205 2645 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:01:02.575660 kubelet[2645]: I0514 18:01:02.575129 2645 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:01:02.575660 kubelet[2645]: I0514 18:01:02.575158 2645 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:01:02.575660 kubelet[2645]: I0514 18:01:02.575381 2645 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:01:02.577238 kubelet[2645]: I0514 18:01:02.577215 2645 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:01:02.579622 kubelet[2645]: I0514 18:01:02.579583 2645 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:01:02.583493 kubelet[2645]: I0514 18:01:02.583461 2645 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:01:02.587090 kubelet[2645]: I0514 18:01:02.586126 2645 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:01:02.587090 kubelet[2645]: I0514 18:01:02.586265 2645 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:01:02.587090 kubelet[2645]: I0514 18:01:02.586351 2645 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:01:02.587090 kubelet[2645]: I0514 18:01:02.586378 2645 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586761 2645 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586772 2645 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586808 2645 state_mem.go:36] "Initialized new in-memory state store" May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586915 2645 kubelet.go:408] "Attempting to sync node with API server" May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586928 2645 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586947 2645 kubelet.go:314] "Adding apiserver pod source" May 14 18:01:02.587274 kubelet[2645]: I0514 18:01:02.586957 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:01:02.588226 kubelet[2645]: I0514 18:01:02.588121 2645 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:01:02.589094 kubelet[2645]: I0514 18:01:02.589050 2645 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:01:02.590103 kubelet[2645]: I0514 18:01:02.589982 2645 server.go:1269] "Started kubelet" May 14 18:01:02.592200 kubelet[2645]: I0514 18:01:02.590695 2645 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:01:02.593682 kubelet[2645]: I0514 18:01:02.592887 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:01:02.593682 kubelet[2645]: I0514 18:01:02.593093 2645 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:01:02.594192 kubelet[2645]: I0514 18:01:02.593891 2645 server.go:460] "Adding debug handlers to kubelet server" May 14 18:01:02.596239 
kubelet[2645]: E0514 18:01:02.596212 2645 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:01:02.596328 kubelet[2645]: I0514 18:01:02.596305 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:01:02.604055 kubelet[2645]: I0514 18:01:02.603965 2645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:01:02.605803 kubelet[2645]: I0514 18:01:02.605779 2645 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:01:02.607593 kubelet[2645]: E0514 18:01:02.607559 2645 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:01:02.607690 kubelet[2645]: I0514 18:01:02.607667 2645 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:01:02.607815 kubelet[2645]: I0514 18:01:02.607802 2645 reconciler.go:26] "Reconciler: start to sync state" May 14 18:01:02.609177 kubelet[2645]: I0514 18:01:02.609128 2645 factory.go:221] Registration of the systemd container factory successfully May 14 18:01:02.610032 kubelet[2645]: I0514 18:01:02.610003 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:01:02.610260 kubelet[2645]: I0514 18:01:02.610223 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:01:02.611385 kubelet[2645]: I0514 18:01:02.611363 2645 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:01:02.611479 kubelet[2645]: I0514 18:01:02.611469 2645 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:01:02.611598 kubelet[2645]: I0514 18:01:02.611586 2645 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:01:02.611693 kubelet[2645]: E0514 18:01:02.611671 2645 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:01:02.612140 kubelet[2645]: I0514 18:01:02.612117 2645 factory.go:221] Registration of the containerd container factory successfully May 14 18:01:02.643438 kubelet[2645]: I0514 18:01:02.643413 2645 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:01:02.643901 kubelet[2645]: I0514 18:01:02.643581 2645 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:01:02.643901 kubelet[2645]: I0514 18:01:02.643606 2645 state_mem.go:36] "Initialized new in-memory state store" May 14 18:01:02.643901 kubelet[2645]: I0514 18:01:02.643748 2645 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:01:02.643901 kubelet[2645]: I0514 18:01:02.643762 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:01:02.643901 kubelet[2645]: I0514 18:01:02.643782 2645 policy_none.go:49] "None policy: Start" May 14 18:01:02.644371 kubelet[2645]: I0514 18:01:02.644357 2645 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:01:02.644466 kubelet[2645]: I0514 18:01:02.644455 2645 state_mem.go:35] "Initializing new in-memory state store" May 14 18:01:02.644704 kubelet[2645]: I0514 18:01:02.644677 2645 state_mem.go:75] "Updated machine memory state" May 14 18:01:02.649612 kubelet[2645]: I0514 18:01:02.649590 2645 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:01:02.649833 kubelet[2645]: I0514 18:01:02.649816 2645 eviction_manager.go:189] 
"Eviction manager: starting control loop" May 14 18:01:02.649925 kubelet[2645]: I0514 18:01:02.649896 2645 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:01:02.650157 kubelet[2645]: I0514 18:01:02.650135 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:01:02.751763 kubelet[2645]: I0514 18:01:02.751726 2645 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:01:02.758146 kubelet[2645]: I0514 18:01:02.758115 2645 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 18:01:02.758246 kubelet[2645]: I0514 18:01:02.758218 2645 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 18:01:02.908494 kubelet[2645]: I0514 18:01:02.908377 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe9d800e8156a66d6668136d896fa1ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe9d800e8156a66d6668136d896fa1ce\") " pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.908494 kubelet[2645]: I0514 18:01:02.908428 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe9d800e8156a66d6668136d896fa1ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fe9d800e8156a66d6668136d896fa1ce\") " pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.908494 kubelet[2645]: I0514 18:01:02.908450 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe9d800e8156a66d6668136d896fa1ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fe9d800e8156a66d6668136d896fa1ce\") " pod="kube-system/kube-apiserver-localhost" May 14 18:01:02.908494 kubelet[2645]: I0514 18:01:02.908470 2645 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.908494 kubelet[2645]: I0514 18:01:02.908487 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.908674 kubelet[2645]: I0514 18:01:02.908503 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.908674 kubelet[2645]: I0514 18:01:02.908520 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.908674 kubelet[2645]: I0514 18:01:02.908535 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:01:02.908674 
kubelet[2645]: I0514 18:01:02.908552 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:01:03.587235 kubelet[2645]: I0514 18:01:03.587208 2645 apiserver.go:52] "Watching apiserver" May 14 18:01:03.608204 kubelet[2645]: I0514 18:01:03.608139 2645 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:01:03.634202 kubelet[2645]: E0514 18:01:03.634156 2645 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:01:03.660692 kubelet[2645]: I0514 18:01:03.660528 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6605095570000001 podStartE2EDuration="1.660509557s" podCreationTimestamp="2025-05-14 18:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:03.651937557 +0000 UTC m=+1.124639481" watchObservedRunningTime="2025-05-14 18:01:03.660509557 +0000 UTC m=+1.133211481" May 14 18:01:03.660858 kubelet[2645]: I0514 18:01:03.660835 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.660828477 podStartE2EDuration="1.660828477s" podCreationTimestamp="2025-05-14 18:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:03.660747717 +0000 UTC m=+1.133449641" watchObservedRunningTime="2025-05-14 18:01:03.660828477 +0000 UTC m=+1.133530441" May 14 18:01:03.680329 
kubelet[2645]: I0514 18:01:03.680265 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6802485969999998 podStartE2EDuration="1.680248597s" podCreationTimestamp="2025-05-14 18:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:03.669665877 +0000 UTC m=+1.142367801" watchObservedRunningTime="2025-05-14 18:01:03.680248597 +0000 UTC m=+1.152950521" May 14 18:01:06.971190 kubelet[2645]: I0514 18:01:06.971143 2645 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:01:06.971901 containerd[1527]: time="2025-05-14T18:01:06.971848521Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 18:01:06.972119 kubelet[2645]: I0514 18:01:06.972104 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:01:07.347319 sudo[1729]: pam_unix(sudo:session): session closed for user root May 14 18:01:07.348380 sshd[1728]: Connection closed by 10.0.0.1 port 48724 May 14 18:01:07.348859 sshd-session[1726]: pam_unix(sshd:session): session closed for user core May 14 18:01:07.352719 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:48724.service: Deactivated successfully. May 14 18:01:07.355277 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:01:07.355499 systemd[1]: session-7.scope: Consumed 5.016s CPU time, 229.6M memory peak. May 14 18:01:07.356730 systemd-logind[1513]: Session 7 logged out. Waiting for processes to exit. May 14 18:01:07.357989 systemd-logind[1513]: Removed session 7. May 14 18:01:07.877913 systemd[1]: Created slice kubepods-besteffort-pod0412f886_9993_4369_bff8_ba68874542fe.slice - libcontainer container kubepods-besteffort-pod0412f886_9993_4369_bff8_ba68874542fe.slice. 
May 14 18:01:07.940262 kubelet[2645]: I0514 18:01:07.940213 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0412f886-9993-4369-bff8-ba68874542fe-kube-proxy\") pod \"kube-proxy-vwdgq\" (UID: \"0412f886-9993-4369-bff8-ba68874542fe\") " pod="kube-system/kube-proxy-vwdgq" May 14 18:01:07.940262 kubelet[2645]: I0514 18:01:07.940264 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0412f886-9993-4369-bff8-ba68874542fe-xtables-lock\") pod \"kube-proxy-vwdgq\" (UID: \"0412f886-9993-4369-bff8-ba68874542fe\") " pod="kube-system/kube-proxy-vwdgq" May 14 18:01:07.940411 kubelet[2645]: I0514 18:01:07.940282 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0412f886-9993-4369-bff8-ba68874542fe-lib-modules\") pod \"kube-proxy-vwdgq\" (UID: \"0412f886-9993-4369-bff8-ba68874542fe\") " pod="kube-system/kube-proxy-vwdgq" May 14 18:01:07.940411 kubelet[2645]: I0514 18:01:07.940299 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5czx\" (UniqueName: \"kubernetes.io/projected/0412f886-9993-4369-bff8-ba68874542fe-kube-api-access-q5czx\") pod \"kube-proxy-vwdgq\" (UID: \"0412f886-9993-4369-bff8-ba68874542fe\") " pod="kube-system/kube-proxy-vwdgq" May 14 18:01:08.152286 systemd[1]: Created slice kubepods-besteffort-poddd74a5b9_5eed_44a2_b3ee_8cf5965b5479.slice - libcontainer container kubepods-besteffort-poddd74a5b9_5eed_44a2_b3ee_8cf5965b5479.slice. 
May 14 18:01:08.193762 containerd[1527]: time="2025-05-14T18:01:08.193722402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vwdgq,Uid:0412f886-9993-4369-bff8-ba68874542fe,Namespace:kube-system,Attempt:0,}" May 14 18:01:08.216080 containerd[1527]: time="2025-05-14T18:01:08.216001390Z" level=info msg="connecting to shim 217257d1d56454f8f0d6788623133c9e74ea96ff58696eb7743f26d7ddc63b12" address="unix:///run/containerd/s/2b896215b2db1d2d82550b474ee8d95e67442bd5cdf96295660e7859c0c2e78d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:08.241351 systemd[1]: Started cri-containerd-217257d1d56454f8f0d6788623133c9e74ea96ff58696eb7743f26d7ddc63b12.scope - libcontainer container 217257d1d56454f8f0d6788623133c9e74ea96ff58696eb7743f26d7ddc63b12. May 14 18:01:08.242323 kubelet[2645]: I0514 18:01:08.242167 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbs2\" (UniqueName: \"kubernetes.io/projected/dd74a5b9-5eed-44a2-b3ee-8cf5965b5479-kube-api-access-5cbs2\") pod \"tigera-operator-6f6897fdc5-xqhnd\" (UID: \"dd74a5b9-5eed-44a2-b3ee-8cf5965b5479\") " pod="tigera-operator/tigera-operator-6f6897fdc5-xqhnd" May 14 18:01:08.242323 kubelet[2645]: I0514 18:01:08.242252 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd74a5b9-5eed-44a2-b3ee-8cf5965b5479-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-xqhnd\" (UID: \"dd74a5b9-5eed-44a2-b3ee-8cf5965b5479\") " pod="tigera-operator/tigera-operator-6f6897fdc5-xqhnd" May 14 18:01:08.262860 containerd[1527]: time="2025-05-14T18:01:08.262826617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vwdgq,Uid:0412f886-9993-4369-bff8-ba68874542fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"217257d1d56454f8f0d6788623133c9e74ea96ff58696eb7743f26d7ddc63b12\"" May 14 18:01:08.266608 containerd[1527]: 
time="2025-05-14T18:01:08.266576796Z" level=info msg="CreateContainer within sandbox \"217257d1d56454f8f0d6788623133c9e74ea96ff58696eb7743f26d7ddc63b12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:01:08.274462 containerd[1527]: time="2025-05-14T18:01:08.274421314Z" level=info msg="Container d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:08.281110 containerd[1527]: time="2025-05-14T18:01:08.281021266Z" level=info msg="CreateContainer within sandbox \"217257d1d56454f8f0d6788623133c9e74ea96ff58696eb7743f26d7ddc63b12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d\"" May 14 18:01:08.281520 containerd[1527]: time="2025-05-14T18:01:08.281497428Z" level=info msg="StartContainer for \"d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d\"" May 14 18:01:08.283248 containerd[1527]: time="2025-05-14T18:01:08.283170916Z" level=info msg="connecting to shim d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d" address="unix:///run/containerd/s/2b896215b2db1d2d82550b474ee8d95e67442bd5cdf96295660e7859c0c2e78d" protocol=ttrpc version=3 May 14 18:01:08.305339 systemd[1]: Started cri-containerd-d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d.scope - libcontainer container d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d. 
May 14 18:01:08.337511 containerd[1527]: time="2025-05-14T18:01:08.337470220Z" level=info msg="StartContainer for \"d5a0c114fb8d272b059e17613723db5fe38827da8586692f4a787dafa9b3d79d\" returns successfully" May 14 18:01:08.456348 containerd[1527]: time="2025-05-14T18:01:08.456244757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-xqhnd,Uid:dd74a5b9-5eed-44a2-b3ee-8cf5965b5479,Namespace:tigera-operator,Attempt:0,}" May 14 18:01:08.472271 containerd[1527]: time="2025-05-14T18:01:08.472222395Z" level=info msg="connecting to shim 54cd8639167e16edf7e0145496436e8d398a5781cff9fe85b176de1d0d636963" address="unix:///run/containerd/s/bf91c81a1fc76eceeb424fb0b9ad77f3cbb936f733b2f384e264a02cbaa641f3" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:08.489457 systemd[1]: Started cri-containerd-54cd8639167e16edf7e0145496436e8d398a5781cff9fe85b176de1d0d636963.scope - libcontainer container 54cd8639167e16edf7e0145496436e8d398a5781cff9fe85b176de1d0d636963. May 14 18:01:08.533263 containerd[1527]: time="2025-05-14T18:01:08.533129451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-xqhnd,Uid:dd74a5b9-5eed-44a2-b3ee-8cf5965b5479,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"54cd8639167e16edf7e0145496436e8d398a5781cff9fe85b176de1d0d636963\"" May 14 18:01:08.535940 containerd[1527]: time="2025-05-14T18:01:08.535895744Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 18:01:08.650097 kubelet[2645]: I0514 18:01:08.650028 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vwdgq" podStartSLOduration=1.6500144190000001 podStartE2EDuration="1.650014419s" podCreationTimestamp="2025-05-14 18:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:08.649441376 +0000 UTC m=+6.122143300" watchObservedRunningTime="2025-05-14 
18:01:08.650014419 +0000 UTC m=+6.122716343" May 14 18:01:10.420894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616979408.mount: Deactivated successfully. May 14 18:01:10.968995 containerd[1527]: time="2025-05-14T18:01:10.968945614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:10.969880 containerd[1527]: time="2025-05-14T18:01:10.969678657Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 18:01:10.970659 containerd[1527]: time="2025-05-14T18:01:10.970625901Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:10.972654 containerd[1527]: time="2025-05-14T18:01:10.972615270Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:10.973320 containerd[1527]: time="2025-05-14T18:01:10.973293312Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.437352328s" May 14 18:01:10.973426 containerd[1527]: time="2025-05-14T18:01:10.973409073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 18:01:10.976982 containerd[1527]: time="2025-05-14T18:01:10.976952408Z" level=info msg="CreateContainer within sandbox \"54cd8639167e16edf7e0145496436e8d398a5781cff9fe85b176de1d0d636963\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 18:01:10.982018 containerd[1527]: time="2025-05-14T18:01:10.981649068Z" level=info msg="Container 7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:10.986950 containerd[1527]: time="2025-05-14T18:01:10.986916051Z" level=info msg="CreateContainer within sandbox \"54cd8639167e16edf7e0145496436e8d398a5781cff9fe85b176de1d0d636963\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703\"" May 14 18:01:10.987296 containerd[1527]: time="2025-05-14T18:01:10.987260852Z" level=info msg="StartContainer for \"7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703\"" May 14 18:01:10.988072 containerd[1527]: time="2025-05-14T18:01:10.988024015Z" level=info msg="connecting to shim 7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703" address="unix:///run/containerd/s/bf91c81a1fc76eceeb424fb0b9ad77f3cbb936f733b2f384e264a02cbaa641f3" protocol=ttrpc version=3 May 14 18:01:11.008319 systemd[1]: Started cri-containerd-7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703.scope - libcontainer container 7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703. 
May 14 18:01:11.033911 containerd[1527]: time="2025-05-14T18:01:11.033880162Z" level=info msg="StartContainer for \"7c9f75f91f0d6e107b8a615a18ddf7954fc98add38240f7e5e4e2d7137c13703\" returns successfully" May 14 18:01:11.668665 kubelet[2645]: I0514 18:01:11.668597 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-xqhnd" podStartSLOduration=1.227335278 podStartE2EDuration="3.668580344s" podCreationTimestamp="2025-05-14 18:01:08 +0000 UTC" firstStartedPulling="2025-05-14 18:01:08.534082495 +0000 UTC m=+6.006784419" lastFinishedPulling="2025-05-14 18:01:10.975327601 +0000 UTC m=+8.448029485" observedRunningTime="2025-05-14 18:01:11.668573143 +0000 UTC m=+9.141275107" watchObservedRunningTime="2025-05-14 18:01:11.668580344 +0000 UTC m=+9.141282268" May 14 18:01:15.223284 systemd[1]: Created slice kubepods-besteffort-podca6e7b18_65cf_4e05_86e9_56575a2c79e0.slice - libcontainer container kubepods-besteffort-podca6e7b18_65cf_4e05_86e9_56575a2c79e0.slice. May 14 18:01:15.298016 systemd[1]: Created slice kubepods-besteffort-pod298f7ffc_196b_4e32_ac5b_fcfa6b077645.slice - libcontainer container kubepods-besteffort-pod298f7ffc_196b_4e32_ac5b_fcfa6b077645.slice. 
May 14 18:01:15.387459 kubelet[2645]: I0514 18:01:15.387420 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca6e7b18-65cf-4e05-86e9-56575a2c79e0-typha-certs\") pod \"calico-typha-847bdc7655-fn2sd\" (UID: \"ca6e7b18-65cf-4e05-86e9-56575a2c79e0\") " pod="calico-system/calico-typha-847bdc7655-fn2sd" May 14 18:01:15.388172 kubelet[2645]: I0514 18:01:15.388146 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtn8\" (UniqueName: \"kubernetes.io/projected/ca6e7b18-65cf-4e05-86e9-56575a2c79e0-kube-api-access-qhtn8\") pod \"calico-typha-847bdc7655-fn2sd\" (UID: \"ca6e7b18-65cf-4e05-86e9-56575a2c79e0\") " pod="calico-system/calico-typha-847bdc7655-fn2sd" May 14 18:01:15.388315 kubelet[2645]: I0514 18:01:15.388300 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6e7b18-65cf-4e05-86e9-56575a2c79e0-tigera-ca-bundle\") pod \"calico-typha-847bdc7655-fn2sd\" (UID: \"ca6e7b18-65cf-4e05-86e9-56575a2c79e0\") " pod="calico-system/calico-typha-847bdc7655-fn2sd" May 14 18:01:15.401730 kubelet[2645]: E0514 18:01:15.401688 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tld67" podUID="5361bfb3-0c70-486e-8cd4-3c702e277eea" May 14 18:01:15.489425 kubelet[2645]: I0514 18:01:15.489217 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/298f7ffc-196b-4e32-ac5b-fcfa6b077645-node-certs\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" 
May 14 18:01:15.489425 kubelet[2645]: I0514 18:01:15.489381 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-flexvol-driver-host\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.489875 kubelet[2645]: I0514 18:01:15.489409 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5361bfb3-0c70-486e-8cd4-3c702e277eea-varrun\") pod \"csi-node-driver-tld67\" (UID: \"5361bfb3-0c70-486e-8cd4-3c702e277eea\") " pod="calico-system/csi-node-driver-tld67" May 14 18:01:15.489875 kubelet[2645]: I0514 18:01:15.489547 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-xtables-lock\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.489875 kubelet[2645]: I0514 18:01:15.489565 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/298f7ffc-196b-4e32-ac5b-fcfa6b077645-tigera-ca-bundle\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.489875 kubelet[2645]: I0514 18:01:15.489579 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-var-run-calico\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.489875 kubelet[2645]: I0514 18:01:15.489603 
2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5361bfb3-0c70-486e-8cd4-3c702e277eea-kubelet-dir\") pod \"csi-node-driver-tld67\" (UID: \"5361bfb3-0c70-486e-8cd4-3c702e277eea\") " pod="calico-system/csi-node-driver-tld67" May 14 18:01:15.490224 kubelet[2645]: I0514 18:01:15.489620 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkbg8\" (UniqueName: \"kubernetes.io/projected/5361bfb3-0c70-486e-8cd4-3c702e277eea-kube-api-access-qkbg8\") pod \"csi-node-driver-tld67\" (UID: \"5361bfb3-0c70-486e-8cd4-3c702e277eea\") " pod="calico-system/csi-node-driver-tld67" May 14 18:01:15.490224 kubelet[2645]: I0514 18:01:15.489638 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-policysync\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.490224 kubelet[2645]: I0514 18:01:15.489660 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-cni-net-dir\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.490224 kubelet[2645]: I0514 18:01:15.489691 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-cni-log-dir\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.490224 kubelet[2645]: I0514 18:01:15.489709 2645 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5361bfb3-0c70-486e-8cd4-3c702e277eea-registration-dir\") pod \"csi-node-driver-tld67\" (UID: \"5361bfb3-0c70-486e-8cd4-3c702e277eea\") " pod="calico-system/csi-node-driver-tld67" May 14 18:01:15.490321 kubelet[2645]: I0514 18:01:15.490216 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-lib-modules\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.490321 kubelet[2645]: I0514 18:01:15.490266 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5361bfb3-0c70-486e-8cd4-3c702e277eea-socket-dir\") pod \"csi-node-driver-tld67\" (UID: \"5361bfb3-0c70-486e-8cd4-3c702e277eea\") " pod="calico-system/csi-node-driver-tld67" May 14 18:01:15.490321 kubelet[2645]: I0514 18:01:15.490301 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-cni-bin-dir\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.490803 kubelet[2645]: I0514 18:01:15.490771 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/298f7ffc-196b-4e32-ac5b-fcfa6b077645-var-lib-calico\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.490860 kubelet[2645]: I0514 18:01:15.490805 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-trwf6\" (UniqueName: \"kubernetes.io/projected/298f7ffc-196b-4e32-ac5b-fcfa6b077645-kube-api-access-trwf6\") pod \"calico-node-kcqzl\" (UID: \"298f7ffc-196b-4e32-ac5b-fcfa6b077645\") " pod="calico-system/calico-node-kcqzl" May 14 18:01:15.540134 containerd[1527]: time="2025-05-14T18:01:15.540094994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847bdc7655-fn2sd,Uid:ca6e7b18-65cf-4e05-86e9-56575a2c79e0,Namespace:calico-system,Attempt:0,}" May 14 18:01:15.570409 containerd[1527]: time="2025-05-14T18:01:15.570324647Z" level=info msg="connecting to shim b3a984638d32b65d70003dc7a2258ab33d6ad11bb0711fe04efde3c94b9d0649" address="unix:///run/containerd/s/e9097c0b4fc5e67575abc43f62c2b9c726449fbebfbd16c35164ef7793deb7d0" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:15.592587 kubelet[2645]: E0514 18:01:15.592279 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.592587 kubelet[2645]: W0514 18:01:15.592301 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.592587 kubelet[2645]: E0514 18:01:15.592403 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.592805 kubelet[2645]: E0514 18:01:15.592791 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.592854 kubelet[2645]: W0514 18:01:15.592844 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.592918 kubelet[2645]: E0514 18:01:15.592906 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.593136 kubelet[2645]: E0514 18:01:15.593119 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.593225 kubelet[2645]: W0514 18:01:15.593211 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.593320 kubelet[2645]: E0514 18:01:15.593307 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.593345 systemd[1]: Started cri-containerd-b3a984638d32b65d70003dc7a2258ab33d6ad11bb0711fe04efde3c94b9d0649.scope - libcontainer container b3a984638d32b65d70003dc7a2258ab33d6ad11bb0711fe04efde3c94b9d0649. 
May 14 18:01:15.596939 kubelet[2645]: E0514 18:01:15.596918 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.597124 kubelet[2645]: W0514 18:01:15.597016 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.597124 kubelet[2645]: E0514 18:01:15.597091 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.597252 kubelet[2645]: E0514 18:01:15.597240 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.597313 kubelet[2645]: W0514 18:01:15.597302 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.597481 kubelet[2645]: E0514 18:01:15.597444 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.597622 kubelet[2645]: E0514 18:01:15.597566 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.597622 kubelet[2645]: W0514 18:01:15.597577 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.597702 kubelet[2645]: E0514 18:01:15.597692 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.598032 kubelet[2645]: E0514 18:01:15.597880 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.598032 kubelet[2645]: W0514 18:01:15.597978 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.598214 kubelet[2645]: E0514 18:01:15.598141 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.598421 kubelet[2645]: E0514 18:01:15.598408 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.598494 kubelet[2645]: W0514 18:01:15.598483 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.598627 kubelet[2645]: E0514 18:01:15.598573 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.598733 kubelet[2645]: E0514 18:01:15.598724 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.598795 kubelet[2645]: W0514 18:01:15.598782 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.598883 kubelet[2645]: E0514 18:01:15.598864 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.599053 kubelet[2645]: E0514 18:01:15.599042 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.599146 kubelet[2645]: W0514 18:01:15.599106 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.599222 kubelet[2645]: E0514 18:01:15.599210 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.599453 kubelet[2645]: E0514 18:01:15.599398 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.599453 kubelet[2645]: W0514 18:01:15.599409 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.599557 kubelet[2645]: E0514 18:01:15.599532 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.599755 kubelet[2645]: E0514 18:01:15.599743 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.599850 kubelet[2645]: W0514 18:01:15.599799 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.599908 kubelet[2645]: E0514 18:01:15.599898 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.600105 kubelet[2645]: E0514 18:01:15.600052 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.600105 kubelet[2645]: W0514 18:01:15.600062 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.600278 kubelet[2645]: E0514 18:01:15.600260 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.600601 kubelet[2645]: E0514 18:01:15.600506 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.600601 kubelet[2645]: W0514 18:01:15.600518 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.600601 kubelet[2645]: E0514 18:01:15.600573 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.600742 kubelet[2645]: E0514 18:01:15.600730 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.601007 kubelet[2645]: W0514 18:01:15.600829 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.601007 kubelet[2645]: E0514 18:01:15.600855 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.601128 kubelet[2645]: E0514 18:01:15.601116 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.601175 kubelet[2645]: W0514 18:01:15.601164 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.601829 kubelet[2645]: E0514 18:01:15.601263 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.601829 kubelet[2645]: E0514 18:01:15.601393 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.601914 kubelet[2645]: W0514 18:01:15.601839 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.601914 kubelet[2645]: E0514 18:01:15.601907 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.603344 kubelet[2645]: E0514 18:01:15.603264 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.603344 kubelet[2645]: W0514 18:01:15.603282 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.603612 kubelet[2645]: E0514 18:01:15.603593 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.604640 kubelet[2645]: E0514 18:01:15.604591 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.604640 kubelet[2645]: W0514 18:01:15.604607 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.604723 kubelet[2645]: E0514 18:01:15.604699 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.605210 kubelet[2645]: E0514 18:01:15.605109 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.605210 kubelet[2645]: W0514 18:01:15.605123 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.605210 kubelet[2645]: E0514 18:01:15.605167 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.606252 kubelet[2645]: E0514 18:01:15.606235 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.606252 kubelet[2645]: W0514 18:01:15.606248 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.606628 kubelet[2645]: E0514 18:01:15.606578 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.606628 kubelet[2645]: W0514 18:01:15.606593 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.606628 kubelet[2645]: E0514 18:01:15.606605 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.606761 kubelet[2645]: E0514 18:01:15.606743 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.608790 kubelet[2645]: E0514 18:01:15.608737 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.608790 kubelet[2645]: W0514 18:01:15.608752 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.608790 kubelet[2645]: E0514 18:01:15.608765 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:15.612742 kubelet[2645]: E0514 18:01:15.612728 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:15.612813 kubelet[2645]: W0514 18:01:15.612801 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:15.612862 kubelet[2645]: E0514 18:01:15.612852 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:15.625496 containerd[1527]: time="2025-05-14T18:01:15.625460938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847bdc7655-fn2sd,Uid:ca6e7b18-65cf-4e05-86e9-56575a2c79e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3a984638d32b65d70003dc7a2258ab33d6ad11bb0711fe04efde3c94b9d0649\"" May 14 18:01:15.627115 containerd[1527]: time="2025-05-14T18:01:15.627086143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 18:01:15.902623 containerd[1527]: time="2025-05-14T18:01:15.902330034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcqzl,Uid:298f7ffc-196b-4e32-ac5b-fcfa6b077645,Namespace:calico-system,Attempt:0,}" May 14 18:01:15.935365 containerd[1527]: time="2025-05-14T18:01:15.935318896Z" level=info msg="connecting to shim 67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c" address="unix:///run/containerd/s/7b428465c6d9f358a09c3d51644b0a83e5426062bf42d78837d7e4f4a67a5607" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:15.960355 systemd[1]: Started cri-containerd-67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c.scope - libcontainer container 67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c. 
May 14 18:01:15.982952 containerd[1527]: time="2025-05-14T18:01:15.982902403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcqzl,Uid:298f7ffc-196b-4e32-ac5b-fcfa6b077645,Namespace:calico-system,Attempt:0,} returns sandbox id \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\"" May 14 18:01:17.195737 containerd[1527]: time="2025-05-14T18:01:17.195693448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:17.196506 containerd[1527]: time="2025-05-14T18:01:17.196453650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 18:01:17.197206 containerd[1527]: time="2025-05-14T18:01:17.197088812Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:17.199100 containerd[1527]: time="2025-05-14T18:01:17.199065857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:17.200215 containerd[1527]: time="2025-05-14T18:01:17.199796059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.572681716s" May 14 18:01:17.200215 containerd[1527]: time="2025-05-14T18:01:17.200097660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 18:01:17.201631 
containerd[1527]: time="2025-05-14T18:01:17.201603864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:01:17.208408 containerd[1527]: time="2025-05-14T18:01:17.207800081Z" level=info msg="CreateContainer within sandbox \"b3a984638d32b65d70003dc7a2258ab33d6ad11bb0711fe04efde3c94b9d0649\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:01:17.215109 containerd[1527]: time="2025-05-14T18:01:17.214323339Z" level=info msg="Container e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:17.219839 containerd[1527]: time="2025-05-14T18:01:17.219781793Z" level=info msg="CreateContainer within sandbox \"b3a984638d32b65d70003dc7a2258ab33d6ad11bb0711fe04efde3c94b9d0649\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd\"" May 14 18:01:17.223337 containerd[1527]: time="2025-05-14T18:01:17.223261043Z" level=info msg="StartContainer for \"e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd\"" May 14 18:01:17.224767 containerd[1527]: time="2025-05-14T18:01:17.224739367Z" level=info msg="connecting to shim e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd" address="unix:///run/containerd/s/e9097c0b4fc5e67575abc43f62c2b9c726449fbebfbd16c35164ef7793deb7d0" protocol=ttrpc version=3 May 14 18:01:17.271376 systemd[1]: Started cri-containerd-e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd.scope - libcontainer container e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd. 
May 14 18:01:17.315624 containerd[1527]: time="2025-05-14T18:01:17.315583254Z" level=info msg="StartContainer for \"e425eac7d666038059c3ed67e889d404c94c8e42241ecb6aaca9a554a42891cd\" returns successfully" May 14 18:01:17.612861 kubelet[2645]: E0514 18:01:17.612716 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tld67" podUID="5361bfb3-0c70-486e-8cd4-3c702e277eea" May 14 18:01:17.684960 kubelet[2645]: I0514 18:01:17.684887 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-847bdc7655-fn2sd" podStartSLOduration=1.110691257 podStartE2EDuration="2.684871218s" podCreationTimestamp="2025-05-14 18:01:15 +0000 UTC" firstStartedPulling="2025-05-14 18:01:15.626729701 +0000 UTC m=+13.099431625" lastFinishedPulling="2025-05-14 18:01:17.200909582 +0000 UTC m=+14.673611586" observedRunningTime="2025-05-14 18:01:17.684361816 +0000 UTC m=+15.157063740" watchObservedRunningTime="2025-05-14 18:01:17.684871218 +0000 UTC m=+15.157573142" May 14 18:01:17.702636 kubelet[2645]: E0514 18:01:17.702591 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.702636 kubelet[2645]: W0514 18:01:17.702618 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.702636 kubelet[2645]: E0514 18:01:17.702638 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.702827 kubelet[2645]: E0514 18:01:17.702807 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.702827 kubelet[2645]: W0514 18:01:17.702820 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.702873 kubelet[2645]: E0514 18:01:17.702831 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.703001 kubelet[2645]: E0514 18:01:17.702978 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.703001 kubelet[2645]: W0514 18:01:17.702989 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.703001 kubelet[2645]: E0514 18:01:17.702999 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.703157 kubelet[2645]: E0514 18:01:17.703134 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.703157 kubelet[2645]: W0514 18:01:17.703145 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.703157 kubelet[2645]: E0514 18:01:17.703153 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.703326 kubelet[2645]: E0514 18:01:17.703301 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.703326 kubelet[2645]: W0514 18:01:17.703313 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.703326 kubelet[2645]: E0514 18:01:17.703321 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.703486 kubelet[2645]: E0514 18:01:17.703473 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.703486 kubelet[2645]: W0514 18:01:17.703484 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.703529 kubelet[2645]: E0514 18:01:17.703493 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.703615 kubelet[2645]: E0514 18:01:17.703604 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.703615 kubelet[2645]: W0514 18:01:17.703614 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.703659 kubelet[2645]: E0514 18:01:17.703621 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.703740 kubelet[2645]: E0514 18:01:17.703730 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.703763 kubelet[2645]: W0514 18:01:17.703740 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.703763 kubelet[2645]: E0514 18:01:17.703747 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.704048 kubelet[2645]: E0514 18:01:17.704023 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704048 kubelet[2645]: W0514 18:01:17.704038 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704048 kubelet[2645]: E0514 18:01:17.704048 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.704237 kubelet[2645]: E0514 18:01:17.704175 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704237 kubelet[2645]: W0514 18:01:17.704194 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704237 kubelet[2645]: E0514 18:01:17.704202 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.704456 kubelet[2645]: E0514 18:01:17.704323 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704456 kubelet[2645]: W0514 18:01:17.704332 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704456 kubelet[2645]: E0514 18:01:17.704340 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.704523 kubelet[2645]: E0514 18:01:17.704459 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704523 kubelet[2645]: W0514 18:01:17.704467 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704523 kubelet[2645]: E0514 18:01:17.704474 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.704660 kubelet[2645]: E0514 18:01:17.704614 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704660 kubelet[2645]: W0514 18:01:17.704625 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704660 kubelet[2645]: E0514 18:01:17.704634 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.704806 kubelet[2645]: E0514 18:01:17.704762 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704806 kubelet[2645]: W0514 18:01:17.704773 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704806 kubelet[2645]: E0514 18:01:17.704781 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.704957 kubelet[2645]: E0514 18:01:17.704892 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.704957 kubelet[2645]: W0514 18:01:17.704902 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.704957 kubelet[2645]: E0514 18:01:17.704909 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.705178 kubelet[2645]: E0514 18:01:17.705098 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.705178 kubelet[2645]: W0514 18:01:17.705109 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.705178 kubelet[2645]: E0514 18:01:17.705117 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.705413 kubelet[2645]: E0514 18:01:17.705340 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.705413 kubelet[2645]: W0514 18:01:17.705354 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.705413 kubelet[2645]: E0514 18:01:17.705374 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.705638 kubelet[2645]: E0514 18:01:17.705555 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.705638 kubelet[2645]: W0514 18:01:17.705567 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.705638 kubelet[2645]: E0514 18:01:17.705580 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.705805 kubelet[2645]: E0514 18:01:17.705785 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.705839 kubelet[2645]: W0514 18:01:17.705805 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.705949 kubelet[2645]: E0514 18:01:17.705847 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.706037 kubelet[2645]: E0514 18:01:17.706021 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.706037 kubelet[2645]: W0514 18:01:17.706034 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.706145 kubelet[2645]: E0514 18:01:17.706047 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.706145 kubelet[2645]: E0514 18:01:17.706251 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.706145 kubelet[2645]: W0514 18:01:17.706260 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.706145 kubelet[2645]: E0514 18:01:17.706274 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.706454 kubelet[2645]: E0514 18:01:17.706438 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.706491 kubelet[2645]: W0514 18:01:17.706449 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.706491 kubelet[2645]: E0514 18:01:17.706483 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.706760 kubelet[2645]: E0514 18:01:17.706693 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.706760 kubelet[2645]: W0514 18:01:17.706710 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.706760 kubelet[2645]: E0514 18:01:17.706728 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.706865 kubelet[2645]: E0514 18:01:17.706851 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.706865 kubelet[2645]: W0514 18:01:17.706861 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707001 kubelet[2645]: E0514 18:01:17.706892 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.707001 kubelet[2645]: E0514 18:01:17.706982 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.707001 kubelet[2645]: W0514 18:01:17.706989 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707070 kubelet[2645]: E0514 18:01:17.707009 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.707205 kubelet[2645]: E0514 18:01:17.707096 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.707205 kubelet[2645]: W0514 18:01:17.707106 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707205 kubelet[2645]: E0514 18:01:17.707120 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.707393 kubelet[2645]: E0514 18:01:17.707300 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.707393 kubelet[2645]: W0514 18:01:17.707311 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707393 kubelet[2645]: E0514 18:01:17.707325 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.707550 kubelet[2645]: E0514 18:01:17.707477 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.707550 kubelet[2645]: W0514 18:01:17.707487 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707550 kubelet[2645]: E0514 18:01:17.707500 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.707711 kubelet[2645]: E0514 18:01:17.707688 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.707711 kubelet[2645]: W0514 18:01:17.707704 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707757 kubelet[2645]: E0514 18:01:17.707720 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.707935 kubelet[2645]: E0514 18:01:17.707854 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.707935 kubelet[2645]: W0514 18:01:17.707864 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.707935 kubelet[2645]: E0514 18:01:17.707880 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.708052 kubelet[2645]: E0514 18:01:17.708026 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.708052 kubelet[2645]: W0514 18:01:17.708034 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.708052 kubelet[2645]: E0514 18:01:17.708047 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:17.708274 kubelet[2645]: E0514 18:01:17.708256 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.708274 kubelet[2645]: W0514 18:01:17.708272 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.708328 kubelet[2645]: E0514 18:01:17.708287 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:01:17.708505 kubelet[2645]: E0514 18:01:17.708442 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:01:17.708505 kubelet[2645]: W0514 18:01:17.708454 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:01:17.708505 kubelet[2645]: E0514 18:01:17.708462 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:01:18.563696 containerd[1527]: time="2025-05-14T18:01:18.563646951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:18.564587 containerd[1527]: time="2025-05-14T18:01:18.564467393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 18:01:18.565242 containerd[1527]: time="2025-05-14T18:01:18.565217795Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:18.567152 containerd[1527]: time="2025-05-14T18:01:18.567116079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:18.568627 containerd[1527]: time="2025-05-14T18:01:18.568579803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.366710299s" May 14 18:01:18.568627 containerd[1527]: time="2025-05-14T18:01:18.568620043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 18:01:18.572298 containerd[1527]: time="2025-05-14T18:01:18.572015012Z" level=info msg="CreateContainer within sandbox \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:01:18.578418 containerd[1527]: time="2025-05-14T18:01:18.578375308Z" level=info msg="Container 19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:18.597717 containerd[1527]: time="2025-05-14T18:01:18.597662037Z" level=info msg="CreateContainer within sandbox \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\"" May 14 18:01:18.598464 containerd[1527]: time="2025-05-14T18:01:18.598395839Z" level=info msg="StartContainer for \"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\"" May 14 18:01:18.599779 containerd[1527]: time="2025-05-14T18:01:18.599753283Z" level=info msg="connecting to shim 19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61" address="unix:///run/containerd/s/7b428465c6d9f358a09c3d51644b0a83e5426062bf42d78837d7e4f4a67a5607" protocol=ttrpc version=3 May 14 18:01:18.629412 systemd[1]: Started cri-containerd-19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61.scope - libcontainer container 19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61. May 14 18:01:18.662817 containerd[1527]: time="2025-05-14T18:01:18.662778563Z" level=info msg="StartContainer for \"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\" returns successfully" May 14 18:01:18.678163 kubelet[2645]: I0514 18:01:18.678123 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:01:18.691810 systemd[1]: cri-containerd-19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61.scope: Deactivated successfully. May 14 18:01:18.692097 systemd[1]: cri-containerd-19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61.scope: Consumed 41ms CPU time, 7.9M memory peak, 6.1M written to disk. 
May 14 18:01:18.743209 containerd[1527]: time="2025-05-14T18:01:18.743086088Z" level=info msg="received exit event container_id:\"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\" id:\"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\" pid:3247 exited_at:{seconds:1747245678 nanos:715203017}" May 14 18:01:18.745455 containerd[1527]: time="2025-05-14T18:01:18.745419534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\" id:\"19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61\" pid:3247 exited_at:{seconds:1747245678 nanos:715203017}" May 14 18:01:18.790130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19c561e73acc57401f6654914b547593107c7c8554120f13c6c44127526e5d61-rootfs.mount: Deactivated successfully. May 14 18:01:19.016396 update_engine[1517]: I20250514 18:01:19.016285 1517 update_attempter.cc:509] Updating boot flags... May 14 18:01:19.612764 kubelet[2645]: E0514 18:01:19.612705 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tld67" podUID="5361bfb3-0c70-486e-8cd4-3c702e277eea" May 14 18:01:19.697392 containerd[1527]: time="2025-05-14T18:01:19.697338969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:01:21.612450 kubelet[2645]: E0514 18:01:21.612379 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tld67" podUID="5361bfb3-0c70-486e-8cd4-3c702e277eea" May 14 18:01:23.612778 kubelet[2645]: E0514 18:01:23.612720 2645 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tld67" podUID="5361bfb3-0c70-486e-8cd4-3c702e277eea" May 14 18:01:23.650349 containerd[1527]: time="2025-05-14T18:01:23.650290680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:23.650725 containerd[1527]: time="2025-05-14T18:01:23.650677961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 18:01:23.651584 containerd[1527]: time="2025-05-14T18:01:23.651546443Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:23.654000 containerd[1527]: time="2025-05-14T18:01:23.653250766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:23.654052 containerd[1527]: time="2025-05-14T18:01:23.653998887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.956607358s" May 14 18:01:23.654052 containerd[1527]: time="2025-05-14T18:01:23.654028087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 18:01:23.657521 containerd[1527]: time="2025-05-14T18:01:23.657478854Z" level=info 
msg="CreateContainer within sandbox \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:01:23.669762 containerd[1527]: time="2025-05-14T18:01:23.667323272Z" level=info msg="Container dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:23.670739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587122911.mount: Deactivated successfully. May 14 18:01:23.676139 containerd[1527]: time="2025-05-14T18:01:23.676022568Z" level=info msg="CreateContainer within sandbox \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\"" May 14 18:01:23.676761 containerd[1527]: time="2025-05-14T18:01:23.676685449Z" level=info msg="StartContainer for \"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\"" May 14 18:01:23.679063 containerd[1527]: time="2025-05-14T18:01:23.679009453Z" level=info msg="connecting to shim dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb" address="unix:///run/containerd/s/7b428465c6d9f358a09c3d51644b0a83e5426062bf42d78837d7e4f4a67a5607" protocol=ttrpc version=3 May 14 18:01:23.710405 systemd[1]: Started cri-containerd-dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb.scope - libcontainer container dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb. May 14 18:01:23.761689 containerd[1527]: time="2025-05-14T18:01:23.761646086Z" level=info msg="StartContainer for \"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\" returns successfully" May 14 18:01:24.273474 systemd[1]: cri-containerd-dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb.scope: Deactivated successfully. 
May 14 18:01:24.273781 systemd[1]: cri-containerd-dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb.scope: Consumed 496ms CPU time, 157M memory peak, 4K read from disk, 150.3M written to disk. May 14 18:01:24.285000 containerd[1527]: time="2025-05-14T18:01:24.284842459Z" level=info msg="received exit event container_id:\"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\" id:\"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\" pid:3322 exited_at:{seconds:1747245684 nanos:284364618}" May 14 18:01:24.285000 containerd[1527]: time="2025-05-14T18:01:24.284953459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\" id:\"dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb\" pid:3322 exited_at:{seconds:1747245684 nanos:284364618}" May 14 18:01:24.303907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dda0a8e9ae4800ca6ba2f8007faaca9c0dd194d59697ef4b638838315ba2cbdb-rootfs.mount: Deactivated successfully. May 14 18:01:24.322141 kubelet[2645]: I0514 18:01:24.322072 2645 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:01:24.426912 systemd[1]: Created slice kubepods-burstable-pod26612257_e3fe_4e30_ba37_ea09f7734c9b.slice - libcontainer container kubepods-burstable-pod26612257_e3fe_4e30_ba37_ea09f7734c9b.slice. 
May 14 18:01:24.432206 kubelet[2645]: W0514 18:01:24.431503 2645 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object May 14 18:01:24.433216 kubelet[2645]: W0514 18:01:24.432975 2645 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object May 14 18:01:24.439616 kubelet[2645]: E0514 18:01:24.439449 2645 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 14 18:01:24.439897 kubelet[2645]: E0514 18:01:24.439741 2645 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 14 18:01:24.442352 systemd[1]: Created slice kubepods-burstable-pod0bf01f1f_2621_4637_a52b_fd8dbc92e2ea.slice - libcontainer container kubepods-burstable-pod0bf01f1f_2621_4637_a52b_fd8dbc92e2ea.slice. 
May 14 18:01:24.452289 systemd[1]: Created slice kubepods-besteffort-podd9760f37_4aa8_4778_a1ce_3cc8769fff10.slice - libcontainer container kubepods-besteffort-podd9760f37_4aa8_4778_a1ce_3cc8769fff10.slice. May 14 18:01:24.466447 systemd[1]: Created slice kubepods-besteffort-pod908afd2c_b056_4674_b396_0c1c7595ceeb.slice - libcontainer container kubepods-besteffort-pod908afd2c_b056_4674_b396_0c1c7595ceeb.slice. May 14 18:01:24.472872 systemd[1]: Created slice kubepods-besteffort-pod1238464b_10ae_4c67_ade5_0e48aef83d6e.slice - libcontainer container kubepods-besteffort-pod1238464b_10ae_4c67_ade5_0e48aef83d6e.slice. May 14 18:01:24.571581 kubelet[2645]: I0514 18:01:24.571461 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxwh7\" (UniqueName: \"kubernetes.io/projected/0bf01f1f-2621-4637-a52b-fd8dbc92e2ea-kube-api-access-xxwh7\") pod \"coredns-6f6b679f8f-29j4q\" (UID: \"0bf01f1f-2621-4637-a52b-fd8dbc92e2ea\") " pod="kube-system/coredns-6f6b679f8f-29j4q" May 14 18:01:24.571581 kubelet[2645]: I0514 18:01:24.571516 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/908afd2c-b056-4674-b396-0c1c7595ceeb-calico-apiserver-certs\") pod \"calico-apiserver-84d447bc64-xbhrh\" (UID: \"908afd2c-b056-4674-b396-0c1c7595ceeb\") " pod="calico-apiserver/calico-apiserver-84d447bc64-xbhrh" May 14 18:01:24.571581 kubelet[2645]: I0514 18:01:24.571537 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8rq\" (UniqueName: \"kubernetes.io/projected/908afd2c-b056-4674-b396-0c1c7595ceeb-kube-api-access-nl8rq\") pod \"calico-apiserver-84d447bc64-xbhrh\" (UID: \"908afd2c-b056-4674-b396-0c1c7595ceeb\") " pod="calico-apiserver/calico-apiserver-84d447bc64-xbhrh" May 14 18:01:24.571581 kubelet[2645]: I0514 18:01:24.571565 2645 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9760f37-4aa8-4778-a1ce-3cc8769fff10-tigera-ca-bundle\") pod \"calico-kube-controllers-768945bb6-nd72r\" (UID: \"d9760f37-4aa8-4778-a1ce-3cc8769fff10\") " pod="calico-system/calico-kube-controllers-768945bb6-nd72r" May 14 18:01:24.571760 kubelet[2645]: I0514 18:01:24.571587 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26612257-e3fe-4e30-ba37-ea09f7734c9b-config-volume\") pod \"coredns-6f6b679f8f-ph2n7\" (UID: \"26612257-e3fe-4e30-ba37-ea09f7734c9b\") " pod="kube-system/coredns-6f6b679f8f-ph2n7" May 14 18:01:24.572149 kubelet[2645]: I0514 18:01:24.572100 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4shlv\" (UniqueName: \"kubernetes.io/projected/26612257-e3fe-4e30-ba37-ea09f7734c9b-kube-api-access-4shlv\") pod \"coredns-6f6b679f8f-ph2n7\" (UID: \"26612257-e3fe-4e30-ba37-ea09f7734c9b\") " pod="kube-system/coredns-6f6b679f8f-ph2n7" May 14 18:01:24.572210 kubelet[2645]: I0514 18:01:24.572168 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hlw4\" (UniqueName: \"kubernetes.io/projected/1238464b-10ae-4c67-ade5-0e48aef83d6e-kube-api-access-6hlw4\") pod \"calico-apiserver-84d447bc64-xbnsb\" (UID: \"1238464b-10ae-4c67-ade5-0e48aef83d6e\") " pod="calico-apiserver/calico-apiserver-84d447bc64-xbnsb" May 14 18:01:24.572254 kubelet[2645]: I0514 18:01:24.572237 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7jb\" (UniqueName: \"kubernetes.io/projected/d9760f37-4aa8-4778-a1ce-3cc8769fff10-kube-api-access-5n7jb\") pod \"calico-kube-controllers-768945bb6-nd72r\" (UID: \"d9760f37-4aa8-4778-a1ce-3cc8769fff10\") " 
pod="calico-system/calico-kube-controllers-768945bb6-nd72r" May 14 18:01:24.572288 kubelet[2645]: I0514 18:01:24.572263 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1238464b-10ae-4c67-ade5-0e48aef83d6e-calico-apiserver-certs\") pod \"calico-apiserver-84d447bc64-xbnsb\" (UID: \"1238464b-10ae-4c67-ade5-0e48aef83d6e\") " pod="calico-apiserver/calico-apiserver-84d447bc64-xbnsb" May 14 18:01:24.572323 kubelet[2645]: I0514 18:01:24.572282 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf01f1f-2621-4637-a52b-fd8dbc92e2ea-config-volume\") pod \"coredns-6f6b679f8f-29j4q\" (UID: \"0bf01f1f-2621-4637-a52b-fd8dbc92e2ea\") " pod="kube-system/coredns-6f6b679f8f-29j4q" May 14 18:01:24.711235 containerd[1527]: time="2025-05-14T18:01:24.710966276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:01:24.736664 containerd[1527]: time="2025-05-14T18:01:24.736609320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ph2n7,Uid:26612257-e3fe-4e30-ba37-ea09f7734c9b,Namespace:kube-system,Attempt:0,}" May 14 18:01:24.747038 containerd[1527]: time="2025-05-14T18:01:24.746846498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-29j4q,Uid:0bf01f1f-2621-4637-a52b-fd8dbc92e2ea,Namespace:kube-system,Attempt:0,}" May 14 18:01:24.774202 containerd[1527]: time="2025-05-14T18:01:24.773596664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768945bb6-nd72r,Uid:d9760f37-4aa8-4778-a1ce-3cc8769fff10,Namespace:calico-system,Attempt:0,}" May 14 18:01:25.222057 containerd[1527]: time="2025-05-14T18:01:25.221941616Z" level=error msg="Failed to destroy network for sandbox \"7dc65cec7e3b358e7be1b1b0e22b019903034f42c8a01e059a6fbda0e0a50842\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.223672 containerd[1527]: time="2025-05-14T18:01:25.223606139Z" level=error msg="Failed to destroy network for sandbox \"1aaa1f19b8554a4247aab0c415ab6cd21cdba1461654db320095a8c6ad4a4ae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.224121 containerd[1527]: time="2025-05-14T18:01:25.224070860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-29j4q,Uid:0bf01f1f-2621-4637-a52b-fd8dbc92e2ea,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc65cec7e3b358e7be1b1b0e22b019903034f42c8a01e059a6fbda0e0a50842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.225233 containerd[1527]: time="2025-05-14T18:01:25.225004581Z" level=error msg="Failed to destroy network for sandbox \"12f329c7f1c1dceb5f8174f6adf71345be3a7d297958f8426b15360436458bbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.225844 containerd[1527]: time="2025-05-14T18:01:25.225704822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ph2n7,Uid:26612257-e3fe-4e30-ba37-ea09f7734c9b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaa1f19b8554a4247aab0c415ab6cd21cdba1461654db320095a8c6ad4a4ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.227053 containerd[1527]: time="2025-05-14T18:01:25.227017384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768945bb6-nd72r,Uid:d9760f37-4aa8-4778-a1ce-3cc8769fff10,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f329c7f1c1dceb5f8174f6adf71345be3a7d297958f8426b15360436458bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.228643 kubelet[2645]: E0514 18:01:25.228581 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaa1f19b8554a4247aab0c415ab6cd21cdba1461654db320095a8c6ad4a4ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.228921 kubelet[2645]: E0514 18:01:25.228673 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaa1f19b8554a4247aab0c415ab6cd21cdba1461654db320095a8c6ad4a4ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ph2n7" May 14 18:01:25.228921 kubelet[2645]: E0514 18:01:25.228694 2645 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aaa1f19b8554a4247aab0c415ab6cd21cdba1461654db320095a8c6ad4a4ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ph2n7" May 14 18:01:25.228921 kubelet[2645]: E0514 18:01:25.228746 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ph2n7_kube-system(26612257-e3fe-4e30-ba37-ea09f7734c9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ph2n7_kube-system(26612257-e3fe-4e30-ba37-ea09f7734c9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aaa1f19b8554a4247aab0c415ab6cd21cdba1461654db320095a8c6ad4a4ae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ph2n7" podUID="26612257-e3fe-4e30-ba37-ea09f7734c9b" May 14 18:01:25.229050 kubelet[2645]: E0514 18:01:25.229014 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f329c7f1c1dceb5f8174f6adf71345be3a7d297958f8426b15360436458bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.229084 kubelet[2645]: E0514 18:01:25.229049 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f329c7f1c1dceb5f8174f6adf71345be3a7d297958f8426b15360436458bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-768945bb6-nd72r" May 14 18:01:25.229084 kubelet[2645]: E0514 18:01:25.229066 2645 kuberuntime_manager.go:1168] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f329c7f1c1dceb5f8174f6adf71345be3a7d297958f8426b15360436458bbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-768945bb6-nd72r" May 14 18:01:25.229143 kubelet[2645]: E0514 18:01:25.229098 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-768945bb6-nd72r_calico-system(d9760f37-4aa8-4778-a1ce-3cc8769fff10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-768945bb6-nd72r_calico-system(d9760f37-4aa8-4778-a1ce-3cc8769fff10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12f329c7f1c1dceb5f8174f6adf71345be3a7d297958f8426b15360436458bbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-768945bb6-nd72r" podUID="d9760f37-4aa8-4778-a1ce-3cc8769fff10" May 14 18:01:25.229814 kubelet[2645]: E0514 18:01:25.229758 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc65cec7e3b358e7be1b1b0e22b019903034f42c8a01e059a6fbda0e0a50842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.229814 kubelet[2645]: E0514 18:01:25.229810 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc65cec7e3b358e7be1b1b0e22b019903034f42c8a01e059a6fbda0e0a50842\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-29j4q" May 14 18:01:25.229918 kubelet[2645]: E0514 18:01:25.229827 2645 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc65cec7e3b358e7be1b1b0e22b019903034f42c8a01e059a6fbda0e0a50842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-29j4q" May 14 18:01:25.229918 kubelet[2645]: E0514 18:01:25.229869 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-29j4q_kube-system(0bf01f1f-2621-4637-a52b-fd8dbc92e2ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-29j4q_kube-system(0bf01f1f-2621-4637-a52b-fd8dbc92e2ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dc65cec7e3b358e7be1b1b0e22b019903034f42c8a01e059a6fbda0e0a50842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-29j4q" podUID="0bf01f1f-2621-4637-a52b-fd8dbc92e2ea" May 14 18:01:25.620392 systemd[1]: Created slice kubepods-besteffort-pod5361bfb3_0c70_486e_8cd4_3c702e277eea.slice - libcontainer container kubepods-besteffort-pod5361bfb3_0c70_486e_8cd4_3c702e277eea.slice. 
May 14 18:01:25.624666 containerd[1527]: time="2025-05-14T18:01:25.624632469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tld67,Uid:5361bfb3-0c70-486e-8cd4-3c702e277eea,Namespace:calico-system,Attempt:0,}" May 14 18:01:25.678203 kubelet[2645]: E0514 18:01:25.677903 2645 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 14 18:01:25.679643 kubelet[2645]: E0514 18:01:25.678334 2645 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 14 18:01:25.679643 kubelet[2645]: E0514 18:01:25.678839 2645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1238464b-10ae-4c67-ade5-0e48aef83d6e-calico-apiserver-certs podName:1238464b-10ae-4c67-ade5-0e48aef83d6e nodeName:}" failed. No retries permitted until 2025-05-14 18:01:26.178814477 +0000 UTC m=+23.651516401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1238464b-10ae-4c67-ade5-0e48aef83d6e-calico-apiserver-certs") pod "calico-apiserver-84d447bc64-xbnsb" (UID: "1238464b-10ae-4c67-ade5-0e48aef83d6e") : failed to sync secret cache: timed out waiting for the condition May 14 18:01:25.679643 kubelet[2645]: E0514 18:01:25.678921 2645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/908afd2c-b056-4674-b396-0c1c7595ceeb-calico-apiserver-certs podName:908afd2c-b056-4674-b396-0c1c7595ceeb nodeName:}" failed. No retries permitted until 2025-05-14 18:01:26.178903757 +0000 UTC m=+23.651605681 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/908afd2c-b056-4674-b396-0c1c7595ceeb-calico-apiserver-certs") pod "calico-apiserver-84d447bc64-xbhrh" (UID: "908afd2c-b056-4674-b396-0c1c7595ceeb") : failed to sync secret cache: timed out waiting for the condition May 14 18:01:25.680716 containerd[1527]: time="2025-05-14T18:01:25.679510118Z" level=error msg="Failed to destroy network for sandbox \"7e3084df7ee65a8b947b9bd74e0655dc5aa8422d085d10390bb22d0676f685f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.680716 containerd[1527]: time="2025-05-14T18:01:25.680639640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tld67,Uid:5361bfb3-0c70-486e-8cd4-3c702e277eea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3084df7ee65a8b947b9bd74e0655dc5aa8422d085d10390bb22d0676f685f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.681434 kubelet[2645]: E0514 18:01:25.681131 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3084df7ee65a8b947b9bd74e0655dc5aa8422d085d10390bb22d0676f685f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:25.681569 kubelet[2645]: E0514 18:01:25.681531 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3084df7ee65a8b947b9bd74e0655dc5aa8422d085d10390bb22d0676f685f8\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tld67" May 14 18:01:25.681936 kubelet[2645]: E0514 18:01:25.681719 2645 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3084df7ee65a8b947b9bd74e0655dc5aa8422d085d10390bb22d0676f685f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tld67" May 14 18:01:25.682885 kubelet[2645]: E0514 18:01:25.682812 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tld67_calico-system(5361bfb3-0c70-486e-8cd4-3c702e277eea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tld67_calico-system(5361bfb3-0c70-486e-8cd4-3c702e277eea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e3084df7ee65a8b947b9bd74e0655dc5aa8422d085d10390bb22d0676f685f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tld67" podUID="5361bfb3-0c70-486e-8cd4-3c702e277eea" May 14 18:01:25.685219 kubelet[2645]: E0514 18:01:25.685002 2645 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 14 18:01:25.688063 kubelet[2645]: E0514 18:01:25.688004 2645 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 14 18:01:25.692895 kubelet[2645]: E0514 18:01:25.692824 2645 
projected.go:194] Error preparing data for projected volume kube-api-access-6hlw4 for pod calico-apiserver/calico-apiserver-84d447bc64-xbnsb: failed to sync configmap cache: timed out waiting for the condition May 14 18:01:25.693603 kubelet[2645]: E0514 18:01:25.693556 2645 projected.go:194] Error preparing data for projected volume kube-api-access-nl8rq for pod calico-apiserver/calico-apiserver-84d447bc64-xbhrh: failed to sync configmap cache: timed out waiting for the condition May 14 18:01:25.693705 kubelet[2645]: E0514 18:01:25.693694 2645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1238464b-10ae-4c67-ade5-0e48aef83d6e-kube-api-access-6hlw4 podName:1238464b-10ae-4c67-ade5-0e48aef83d6e nodeName:}" failed. No retries permitted until 2025-05-14 18:01:26.193671341 +0000 UTC m=+23.666373265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6hlw4" (UniqueName: "kubernetes.io/projected/1238464b-10ae-4c67-ade5-0e48aef83d6e-kube-api-access-6hlw4") pod "calico-apiserver-84d447bc64-xbnsb" (UID: "1238464b-10ae-4c67-ade5-0e48aef83d6e") : failed to sync configmap cache: timed out waiting for the condition May 14 18:01:25.693836 kubelet[2645]: E0514 18:01:25.693814 2645 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/908afd2c-b056-4674-b396-0c1c7595ceeb-kube-api-access-nl8rq podName:908afd2c-b056-4674-b396-0c1c7595ceeb nodeName:}" failed. No retries permitted until 2025-05-14 18:01:26.193804302 +0000 UTC m=+23.666506226 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nl8rq" (UniqueName: "kubernetes.io/projected/908afd2c-b056-4674-b396-0c1c7595ceeb-kube-api-access-nl8rq") pod "calico-apiserver-84d447bc64-xbhrh" (UID: "908afd2c-b056-4674-b396-0c1c7595ceeb") : failed to sync configmap cache: timed out waiting for the condition May 14 18:01:25.694102 systemd[1]: run-netns-cni\x2d8ded43be\x2d55a0\x2d715d\x2d6bb9\x2dc18d1827d0c0.mount: Deactivated successfully. May 14 18:01:25.694638 systemd[1]: run-netns-cni\x2d9d59eff0\x2d2741\x2d725d\x2de329\x2d99234e6d2892.mount: Deactivated successfully. May 14 18:01:26.573647 containerd[1527]: time="2025-05-14T18:01:26.573606231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbhrh,Uid:908afd2c-b056-4674-b396-0c1c7595ceeb,Namespace:calico-apiserver,Attempt:0,}" May 14 18:01:26.577422 containerd[1527]: time="2025-05-14T18:01:26.577385956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbnsb,Uid:1238464b-10ae-4c67-ade5-0e48aef83d6e,Namespace:calico-apiserver,Attempt:0,}" May 14 18:01:26.639052 containerd[1527]: time="2025-05-14T18:01:26.638938370Z" level=error msg="Failed to destroy network for sandbox \"9846addc10cbac13bb64729bd490749a523a246d3e12241e587fec44fe15a5f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:26.640729 containerd[1527]: time="2025-05-14T18:01:26.640627292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbhrh,Uid:908afd2c-b056-4674-b396-0c1c7595ceeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9846addc10cbac13bb64729bd490749a523a246d3e12241e587fec44fe15a5f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:26.641140 kubelet[2645]: E0514 18:01:26.641102 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9846addc10cbac13bb64729bd490749a523a246d3e12241e587fec44fe15a5f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:26.641900 kubelet[2645]: E0514 18:01:26.641164 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9846addc10cbac13bb64729bd490749a523a246d3e12241e587fec44fe15a5f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84d447bc64-xbhrh" May 14 18:01:26.641900 kubelet[2645]: E0514 18:01:26.641223 2645 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9846addc10cbac13bb64729bd490749a523a246d3e12241e587fec44fe15a5f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84d447bc64-xbhrh" May 14 18:01:26.641900 kubelet[2645]: E0514 18:01:26.641275 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84d447bc64-xbhrh_calico-apiserver(908afd2c-b056-4674-b396-0c1c7595ceeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84d447bc64-xbhrh_calico-apiserver(908afd2c-b056-4674-b396-0c1c7595ceeb)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"9846addc10cbac13bb64729bd490749a523a246d3e12241e587fec44fe15a5f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84d447bc64-xbhrh" podUID="908afd2c-b056-4674-b396-0c1c7595ceeb" May 14 18:01:26.653002 containerd[1527]: time="2025-05-14T18:01:26.652944711Z" level=error msg="Failed to destroy network for sandbox \"fc265b59ef4ad8b070739e67851a9d228606e54307f47c2a19ecff957769ae9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:26.654397 containerd[1527]: time="2025-05-14T18:01:26.654339753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbnsb,Uid:1238464b-10ae-4c67-ade5-0e48aef83d6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc265b59ef4ad8b070739e67851a9d228606e54307f47c2a19ecff957769ae9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:26.655339 kubelet[2645]: E0514 18:01:26.654561 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc265b59ef4ad8b070739e67851a9d228606e54307f47c2a19ecff957769ae9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:01:26.655339 kubelet[2645]: E0514 18:01:26.654616 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"fc265b59ef4ad8b070739e67851a9d228606e54307f47c2a19ecff957769ae9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84d447bc64-xbnsb" May 14 18:01:26.655339 kubelet[2645]: E0514 18:01:26.654639 2645 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc265b59ef4ad8b070739e67851a9d228606e54307f47c2a19ecff957769ae9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84d447bc64-xbnsb" May 14 18:01:26.655468 kubelet[2645]: E0514 18:01:26.654683 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84d447bc64-xbnsb_calico-apiserver(1238464b-10ae-4c67-ade5-0e48aef83d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84d447bc64-xbnsb_calico-apiserver(1238464b-10ae-4c67-ade5-0e48aef83d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc265b59ef4ad8b070739e67851a9d228606e54307f47c2a19ecff957769ae9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84d447bc64-xbnsb" podUID="1238464b-10ae-4c67-ade5-0e48aef83d6e" May 14 18:01:28.557091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347052699.mount: Deactivated successfully. 
May 14 18:01:28.621625 containerd[1527]: time="2025-05-14T18:01:28.621539935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 18:01:28.625026 containerd[1527]: time="2025-05-14T18:01:28.624977620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.913969984s" May 14 18:01:28.625286 containerd[1527]: time="2025-05-14T18:01:28.625140220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 18:01:28.631721 containerd[1527]: time="2025-05-14T18:01:28.631666589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:28.632389 containerd[1527]: time="2025-05-14T18:01:28.632310070Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:28.633358 containerd[1527]: time="2025-05-14T18:01:28.633321271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:28.634993 containerd[1527]: time="2025-05-14T18:01:28.634957153Z" level=info msg="CreateContainer within sandbox \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:01:28.711447 containerd[1527]: time="2025-05-14T18:01:28.711352535Z" level=info msg="Container 
f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:28.749338 containerd[1527]: time="2025-05-14T18:01:28.749294866Z" level=info msg="CreateContainer within sandbox \"67e99c59678a5be89db39adcd39d5a3f45ae76aa010f038214f415d448aa716c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\"" May 14 18:01:28.749827 containerd[1527]: time="2025-05-14T18:01:28.749788867Z" level=info msg="StartContainer for \"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\"" May 14 18:01:28.751293 containerd[1527]: time="2025-05-14T18:01:28.751251909Z" level=info msg="connecting to shim f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577" address="unix:///run/containerd/s/7b428465c6d9f358a09c3d51644b0a83e5426062bf42d78837d7e4f4a67a5607" protocol=ttrpc version=3 May 14 18:01:28.777361 systemd[1]: Started cri-containerd-f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577.scope - libcontainer container f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577. May 14 18:01:28.815804 containerd[1527]: time="2025-05-14T18:01:28.815715195Z" level=info msg="StartContainer for \"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\" returns successfully" May 14 18:01:28.989965 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 18:01:28.990069 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 14 18:01:29.741194 kubelet[2645]: I0514 18:01:29.741123 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kcqzl" podStartSLOduration=2.100507479 podStartE2EDuration="14.74110829s" podCreationTimestamp="2025-05-14 18:01:15 +0000 UTC" firstStartedPulling="2025-05-14 18:01:15.98526237 +0000 UTC m=+13.457964254" lastFinishedPulling="2025-05-14 18:01:28.625863141 +0000 UTC m=+26.098565065" observedRunningTime="2025-05-14 18:01:29.737664326 +0000 UTC m=+27.210366250" watchObservedRunningTime="2025-05-14 18:01:29.74110829 +0000 UTC m=+27.213810214" May 14 18:01:30.239662 containerd[1527]: time="2025-05-14T18:01:30.237960174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\" id:\"2e1d8550d184a65f7f7c021da3be7100d5c4ea39b1f9a8687964785039db5c9a\" pid:3664 exit_status:1 exited_at:{seconds:1747245690 nanos:237639494}" May 14 18:01:30.357687 containerd[1527]: time="2025-05-14T18:01:30.357642314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\" id:\"9a81c872a8554293bcdf80dc0c2886a0bf74b064719169d8fcaea4c69ebe6d69\" pid:3690 exit_status:1 exited_at:{seconds:1747245690 nanos:357328674}" May 14 18:01:30.784007 containerd[1527]: time="2025-05-14T18:01:30.783958215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\" id:\"75952d34432bd76b95ec9809834df910b4980f43fcee7372bd5cf179b41f8996\" pid:3811 exit_status:1 exited_at:{seconds:1747245690 nanos:783687695}" May 14 18:01:31.791986 containerd[1527]: time="2025-05-14T18:01:31.791928701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\" id:\"4426e9d218be08301bb451890ab4265ff7bf649e0982e22fc0efb144da58a47c\" pid:3860 exit_status:1 
exited_at:{seconds:1747245691 nanos:791569741}" May 14 18:01:34.272971 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:57224.service - OpenSSH per-connection server daemon (10.0.0.1:57224). May 14 18:01:34.350149 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 57224 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:34.352283 sshd-session[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:34.357252 systemd-logind[1513]: New session 8 of user core. May 14 18:01:34.369425 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:01:34.509057 sshd[3927]: Connection closed by 10.0.0.1 port 57224 May 14 18:01:34.508882 sshd-session[3923]: pam_unix(sshd:session): session closed for user core May 14 18:01:34.514318 systemd-logind[1513]: Session 8 logged out. Waiting for processes to exit. May 14 18:01:34.514603 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:57224.service: Deactivated successfully. May 14 18:01:34.518070 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:01:34.520340 systemd-logind[1513]: Removed session 8. 
May 14 18:01:36.614200 containerd[1527]: time="2025-05-14T18:01:36.614152939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tld67,Uid:5361bfb3-0c70-486e-8cd4-3c702e277eea,Namespace:calico-system,Attempt:0,}" May 14 18:01:37.081515 systemd-networkd[1423]: calie94076b5bd3: Link UP May 14 18:01:37.082029 systemd-networkd[1423]: calie94076b5bd3: Gained carrier May 14 18:01:37.099627 containerd[1527]: 2025-05-14 18:01:36.643 [INFO][3999] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:01:37.099627 containerd[1527]: 2025-05-14 18:01:36.761 [INFO][3999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tld67-eth0 csi-node-driver- calico-system 5361bfb3-0c70-486e-8cd4-3c702e277eea 617 0 2025-05-14 18:01:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tld67 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie94076b5bd3 [] []}} ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-" May 14 18:01:37.099627 containerd[1527]: 2025-05-14 18:01:36.761 [INFO][3999] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.099627 containerd[1527]: 2025-05-14 18:01:36.918 [INFO][4034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" HandleID="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Workload="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:36.935 [INFO][4034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" HandleID="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Workload="localhost-k8s-csi--node--driver--tld67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059a230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tld67", "timestamp":"2025-05-14 18:01:36.918874742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:36.936 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:36.936 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:36.936 [INFO][4034] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:36.939 [INFO][4034] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" host="localhost" May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:37.031 [INFO][4034] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:37.038 [INFO][4034] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:37.040 [INFO][4034] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:37.042 [INFO][4034] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:01:37.100060 containerd[1527]: 2025-05-14 18:01:37.043 [INFO][4034] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" host="localhost" May 14 18:01:37.100433 containerd[1527]: 2025-05-14 18:01:37.045 [INFO][4034] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47 May 14 18:01:37.100433 containerd[1527]: 2025-05-14 18:01:37.050 [INFO][4034] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" host="localhost" May 14 18:01:37.100433 containerd[1527]: 2025-05-14 18:01:37.056 [INFO][4034] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" host="localhost" May 14 18:01:37.100433 containerd[1527]: 2025-05-14 18:01:37.056 [INFO][4034] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" host="localhost" May 14 18:01:37.100433 containerd[1527]: 2025-05-14 18:01:37.056 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:01:37.100433 containerd[1527]: 2025-05-14 18:01:37.056 [INFO][4034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" HandleID="k8s-pod-network.38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Workload="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.100602 containerd[1527]: 2025-05-14 18:01:37.062 [INFO][3999] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tld67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5361bfb3-0c70-486e-8cd4-3c702e277eea", ResourceVersion:"617", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tld67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie94076b5bd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:37.100602 containerd[1527]: 2025-05-14 18:01:37.063 [INFO][3999] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.101923 containerd[1527]: 2025-05-14 18:01:37.063 [INFO][3999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie94076b5bd3 ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.101923 containerd[1527]: 2025-05-14 18:01:37.082 [INFO][3999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.102138 containerd[1527]: 2025-05-14 18:01:37.082 [INFO][3999] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" 
Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tld67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5361bfb3-0c70-486e-8cd4-3c702e277eea", ResourceVersion:"617", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47", Pod:"csi-node-driver-tld67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie94076b5bd3", MAC:"22:5b:b7:03:20:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:37.102260 containerd[1527]: 2025-05-14 18:01:37.094 [INFO][3999] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" Namespace="calico-system" Pod="csi-node-driver-tld67" WorkloadEndpoint="localhost-k8s-csi--node--driver--tld67-eth0" May 14 18:01:37.246925 containerd[1527]: 
time="2025-05-14T18:01:37.246877032Z" level=info msg="connecting to shim 38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47" address="unix:///run/containerd/s/b2380f2795cb84cbc765f501fee5f2d8bb0fbd572580a36b9f8c6a7b2bcc2913" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:37.274696 systemd[1]: Started cri-containerd-38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47.scope - libcontainer container 38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47. May 14 18:01:37.301353 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:37.331843 containerd[1527]: time="2025-05-14T18:01:37.331739215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tld67,Uid:5361bfb3-0c70-486e-8cd4-3c702e277eea,Namespace:calico-system,Attempt:0,} returns sandbox id \"38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47\"" May 14 18:01:37.354501 containerd[1527]: time="2025-05-14T18:01:37.354405952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 18:01:38.613404 containerd[1527]: time="2025-05-14T18:01:38.613052145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ph2n7,Uid:26612257-e3fe-4e30-ba37-ea09f7734c9b,Namespace:kube-system,Attempt:0,}" May 14 18:01:38.617354 containerd[1527]: time="2025-05-14T18:01:38.617261708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-29j4q,Uid:0bf01f1f-2621-4637-a52b-fd8dbc92e2ea,Namespace:kube-system,Attempt:0,}" May 14 18:01:38.646138 containerd[1527]: time="2025-05-14T18:01:38.646078008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 18:01:38.646761 containerd[1527]: time="2025-05-14T18:01:38.646719088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
May 14 18:01:38.650910 containerd[1527]: time="2025-05-14T18:01:38.650864251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.296362619s" May 14 18:01:38.651342 containerd[1527]: time="2025-05-14T18:01:38.651306771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 18:01:38.651559 containerd[1527]: time="2025-05-14T18:01:38.651036131Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:38.652994 containerd[1527]: time="2025-05-14T18:01:38.652961773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:38.671323 systemd-networkd[1423]: calie94076b5bd3: Gained IPv6LL May 14 18:01:38.671938 containerd[1527]: time="2025-05-14T18:01:38.671751546Z" level=info msg="CreateContainer within sandbox \"38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 18:01:38.681434 containerd[1527]: time="2025-05-14T18:01:38.681381833Z" level=info msg="Container 47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:38.699851 containerd[1527]: time="2025-05-14T18:01:38.699587045Z" level=info msg="CreateContainer within sandbox \"38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d\"" May 14 18:01:38.702220 containerd[1527]: time="2025-05-14T18:01:38.700731966Z" level=info msg="StartContainer for \"47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d\"" May 14 18:01:38.702515 containerd[1527]: time="2025-05-14T18:01:38.702486207Z" level=info msg="connecting to shim 47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d" address="unix:///run/containerd/s/b2380f2795cb84cbc765f501fee5f2d8bb0fbd572580a36b9f8c6a7b2bcc2913" protocol=ttrpc version=3 May 14 18:01:38.728436 systemd[1]: Started cri-containerd-47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d.scope - libcontainer container 47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d. May 14 18:01:38.786827 containerd[1527]: time="2025-05-14T18:01:38.786752786Z" level=info msg="StartContainer for \"47de5ae22a6af74813335fc262ba93016718b38756b20d9c704e875fd1c5d80d\" returns successfully" May 14 18:01:38.789341 containerd[1527]: time="2025-05-14T18:01:38.789230108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 18:01:38.929150 kubelet[2645]: I0514 18:01:38.929114 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:01:38.960559 systemd-networkd[1423]: cali2186f78e1e2: Link UP May 14 18:01:38.962710 systemd-networkd[1423]: cali2186f78e1e2: Gained carrier May 14 18:01:38.985860 containerd[1527]: 2025-05-14 18:01:38.692 [INFO][4147] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:01:38.985860 containerd[1527]: 2025-05-14 18:01:38.716 [INFO][4147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--29j4q-eth0 coredns-6f6b679f8f- kube-system 0bf01f1f-2621-4637-a52b-fd8dbc92e2ea 696 0 2025-05-14 18:01:08 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-29j4q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2186f78e1e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-" May 14 18:01:38.985860 containerd[1527]: 2025-05-14 18:01:38.716 [INFO][4147] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:38.985860 containerd[1527]: 2025-05-14 18:01:38.785 [INFO][4185] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" HandleID="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Workload="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.899 [INFO][4185] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" HandleID="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Workload="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000422ec0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-29j4q", "timestamp":"2025-05-14 18:01:38.785066705 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.899 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.900 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.900 [INFO][4185] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.903 [INFO][4185] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" host="localhost" May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.911 [INFO][4185] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.920 [INFO][4185] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.923 [INFO][4185] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.926 [INFO][4185] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:01:38.986718 containerd[1527]: 2025-05-14 18:01:38.926 [INFO][4185] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" host="localhost" May 14 18:01:38.988257 containerd[1527]: 2025-05-14 18:01:38.928 [INFO][4185] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5 May 14 18:01:38.988257 containerd[1527]: 2025-05-14 18:01:38.938 [INFO][4185] 
ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" host="localhost" May 14 18:01:38.988257 containerd[1527]: 2025-05-14 18:01:38.948 [INFO][4185] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" host="localhost" May 14 18:01:38.988257 containerd[1527]: 2025-05-14 18:01:38.949 [INFO][4185] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" host="localhost" May 14 18:01:38.988257 containerd[1527]: 2025-05-14 18:01:38.949 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:01:38.988257 containerd[1527]: 2025-05-14 18:01:38.949 [INFO][4185] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" HandleID="k8s-pod-network.117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Workload="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:38.988393 containerd[1527]: 2025-05-14 18:01:38.955 [INFO][4147] cni-plugin/k8s.go 386: Populated endpoint ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--29j4q-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0bf01f1f-2621-4637-a52b-fd8dbc92e2ea", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 8, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-29j4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2186f78e1e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:38.988464 containerd[1527]: 2025-05-14 18:01:38.955 [INFO][4147] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:38.988464 containerd[1527]: 2025-05-14 18:01:38.955 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2186f78e1e2 ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:38.988464 containerd[1527]: 2025-05-14 18:01:38.962 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:38.988526 containerd[1527]: 2025-05-14 18:01:38.962 [INFO][4147] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--29j4q-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0bf01f1f-2621-4637-a52b-fd8dbc92e2ea", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5", Pod:"coredns-6f6b679f8f-29j4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali2186f78e1e2", MAC:"86:1d:a3:04:76:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:38.988526 containerd[1527]: 2025-05-14 18:01:38.980 [INFO][4147] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" Namespace="kube-system" Pod="coredns-6f6b679f8f-29j4q" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--29j4q-eth0" May 14 18:01:39.025602 containerd[1527]: time="2025-05-14T18:01:39.025553193Z" level=info msg="connecting to shim 117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5" address="unix:///run/containerd/s/6b9b957aaf63497ab399646e0942726b0d3352035d552bfb4164a9ce8a47d662" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:39.054427 systemd[1]: Started cri-containerd-117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5.scope - libcontainer container 117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5. 
May 14 18:01:39.059331 systemd-networkd[1423]: cali20761546bf4: Link UP May 14 18:01:39.059684 systemd-networkd[1423]: cali20761546bf4: Gained carrier May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.676 [INFO][4135] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.699 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0 coredns-6f6b679f8f- kube-system 26612257-e3fe-4e30-ba37-ea09f7734c9b 693 0 2025-05-14 18:01:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-ph2n7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20761546bf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.699 [INFO][4135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.785 [INFO][4178] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" HandleID="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Workload="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.903 [INFO][4178] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" HandleID="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Workload="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000372860), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-ph2n7", "timestamp":"2025-05-14 18:01:38.785289825 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.903 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.949 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:38.951 [INFO][4178] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.005 [INFO][4178] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.013 [INFO][4178] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.021 [INFO][4178] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.026 [INFO][4178] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.029 [INFO][4178] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.029 [INFO][4178] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.031 [INFO][4178] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843 May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.042 [INFO][4178] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.051 [INFO][4178] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.051 [INFO][4178] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" host="localhost" May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.051 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:01:39.084839 containerd[1527]: 2025-05-14 18:01:39.051 [INFO][4178] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" HandleID="k8s-pod-network.9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Workload="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.085732 containerd[1527]: 2025-05-14 18:01:39.054 [INFO][4135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"26612257-e3fe-4e30-ba37-ea09f7734c9b", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-ph2n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20761546bf4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:39.085732 containerd[1527]: 2025-05-14 18:01:39.054 [INFO][4135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.085732 containerd[1527]: 2025-05-14 18:01:39.054 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20761546bf4 ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.085732 containerd[1527]: 2025-05-14 18:01:39.059 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.085732 containerd[1527]: 2025-05-14 18:01:39.060 [INFO][4135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"26612257-e3fe-4e30-ba37-ea09f7734c9b", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843", Pod:"coredns-6f6b679f8f-ph2n7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20761546bf4", MAC:"96:34:50:7c:73:54", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:39.085732 containerd[1527]: 2025-05-14 18:01:39.079 [INFO][4135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" Namespace="kube-system" Pod="coredns-6f6b679f8f-ph2n7" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ph2n7-eth0" May 14 18:01:39.090131 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:39.114133 containerd[1527]: time="2025-05-14T18:01:39.114087491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-29j4q,Uid:0bf01f1f-2621-4637-a52b-fd8dbc92e2ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5\"" May 14 18:01:39.118925 containerd[1527]: time="2025-05-14T18:01:39.118852254Z" level=info msg="CreateContainer within sandbox \"117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:01:39.128081 containerd[1527]: time="2025-05-14T18:01:39.127642060Z" level=info msg="connecting to shim 9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843" address="unix:///run/containerd/s/2dcc2ca54dc173bbc9f0714c2101b28c73e562cbede285b2e189eea6b913e832" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:39.137776 containerd[1527]: time="2025-05-14T18:01:39.137737026Z" level=info msg="Container 58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:39.148456 systemd[1]: Started cri-containerd-9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843.scope - libcontainer container 9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843. 
May 14 18:01:39.153651 containerd[1527]: time="2025-05-14T18:01:39.153595357Z" level=info msg="CreateContainer within sandbox \"117ea7a2c7ea005ce9aa30a4cbf1c9fd89e097c7fa31a0fb4158393d8da3a2f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57\"" May 14 18:01:39.154470 containerd[1527]: time="2025-05-14T18:01:39.154438117Z" level=info msg="StartContainer for \"58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57\"" May 14 18:01:39.155471 containerd[1527]: time="2025-05-14T18:01:39.155374638Z" level=info msg="connecting to shim 58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57" address="unix:///run/containerd/s/6b9b957aaf63497ab399646e0942726b0d3352035d552bfb4164a9ce8a47d662" protocol=ttrpc version=3 May 14 18:01:39.166720 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:39.179521 systemd[1]: Started cri-containerd-58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57.scope - libcontainer container 58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57. 
May 14 18:01:39.191761 containerd[1527]: time="2025-05-14T18:01:39.191710262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ph2n7,Uid:26612257-e3fe-4e30-ba37-ea09f7734c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843\"" May 14 18:01:39.195351 containerd[1527]: time="2025-05-14T18:01:39.195292664Z" level=info msg="CreateContainer within sandbox \"9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:01:39.210348 containerd[1527]: time="2025-05-14T18:01:39.210235314Z" level=info msg="Container 7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:39.217933 containerd[1527]: time="2025-05-14T18:01:39.217885839Z" level=info msg="StartContainer for \"58a64fa4da5bff4243c00803a7622c45b0945bbf8b2ed63d57cbe7e4a4e35f57\" returns successfully" May 14 18:01:39.218864 containerd[1527]: time="2025-05-14T18:01:39.218820480Z" level=info msg="CreateContainer within sandbox \"9c5cd6891daf6084f34cf1e1ebf11266274d92c6abd47cac608449444d77b843\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8\"" May 14 18:01:39.220918 containerd[1527]: time="2025-05-14T18:01:39.220884641Z" level=info msg="StartContainer for \"7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8\"" May 14 18:01:39.222158 containerd[1527]: time="2025-05-14T18:01:39.222109242Z" level=info msg="connecting to shim 7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8" address="unix:///run/containerd/s/2dcc2ca54dc173bbc9f0714c2101b28c73e562cbede285b2e189eea6b913e832" protocol=ttrpc version=3 May 14 18:01:39.250426 systemd[1]: Started cri-containerd-7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8.scope - libcontainer container 
7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8. May 14 18:01:39.287912 containerd[1527]: time="2025-05-14T18:01:39.287869045Z" level=info msg="StartContainer for \"7557e0e0389818183e8314512ba7e44669501cda2b1b365c358819b220e3dac8\" returns successfully" May 14 18:01:39.525063 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:57232.service - OpenSSH per-connection server daemon (10.0.0.1:57232). May 14 18:01:39.601153 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 57232 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA May 14 18:01:39.603108 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:39.610650 systemd-logind[1513]: New session 9 of user core. May 14 18:01:39.613684 containerd[1527]: time="2025-05-14T18:01:39.613626059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbnsb,Uid:1238464b-10ae-4c67-ade5-0e48aef83d6e,Namespace:calico-apiserver,Attempt:0,}" May 14 18:01:39.622469 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:01:39.821426 kubelet[2645]: I0514 18:01:39.821172 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-29j4q" podStartSLOduration=31.821152275 podStartE2EDuration="31.821152275s" podCreationTimestamp="2025-05-14 18:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:39.819439714 +0000 UTC m=+37.292141638" watchObservedRunningTime="2025-05-14 18:01:39.821152275 +0000 UTC m=+37.293854199" May 14 18:01:39.828665 sshd[4444]: Connection closed by 10.0.0.1 port 57232 May 14 18:01:39.829428 sshd-session[4425]: pam_unix(sshd:session): session closed for user core May 14 18:01:39.835736 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:57232.service: Deactivated successfully. 
May 14 18:01:39.837892 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:01:39.839540 systemd-logind[1513]: Session 9 logged out. Waiting for processes to exit. May 14 18:01:39.841554 systemd-logind[1513]: Removed session 9. May 14 18:01:39.925010 kubelet[2645]: I0514 18:01:39.924735 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ph2n7" podStartSLOduration=31.924711664 podStartE2EDuration="31.924711664s" podCreationTimestamp="2025-05-14 18:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:01:39.902971289 +0000 UTC m=+37.375673213" watchObservedRunningTime="2025-05-14 18:01:39.924711664 +0000 UTC m=+37.397413588" May 14 18:01:39.928089 systemd-networkd[1423]: calica475e14af3: Link UP May 14 18:01:39.928396 systemd-networkd[1423]: calica475e14af3: Gained carrier May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.648 [INFO][4428] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.665 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0 calico-apiserver-84d447bc64- calico-apiserver 1238464b-10ae-4c67-ade5-0e48aef83d6e 701 0 2025-05-14 18:01:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84d447bc64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84d447bc64-xbnsb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica475e14af3 [] []}} ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" 
Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.665 [INFO][4428] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.725 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" HandleID="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Workload="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.750 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" HandleID="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Workload="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000262d60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84d447bc64-xbnsb", "timestamp":"2025-05-14 18:01:39.725532493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.750 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.751 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.751 [INFO][4459] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.758 [INFO][4459] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.769 [INFO][4459] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.787 [INFO][4459] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.793 [INFO][4459] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.797 [INFO][4459] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.798 [INFO][4459] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.819 [INFO][4459] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5 May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.875 [INFO][4459] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.910 [INFO][4459] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.912 [INFO][4459] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" host="localhost" May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.912 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:01:39.956756 containerd[1527]: 2025-05-14 18:01:39.912 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" HandleID="k8s-pod-network.696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Workload="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:39.957355 containerd[1527]: 2025-05-14 18:01:39.920 [INFO][4428] cni-plugin/k8s.go 386: Populated endpoint ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0", GenerateName:"calico-apiserver-84d447bc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"1238464b-10ae-4c67-ade5-0e48aef83d6e", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d447bc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84d447bc64-xbnsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica475e14af3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:39.957355 containerd[1527]: 2025-05-14 18:01:39.920 [INFO][4428] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:39.957355 containerd[1527]: 2025-05-14 18:01:39.920 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica475e14af3 ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:39.957355 containerd[1527]: 2025-05-14 18:01:39.930 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:39.957355 containerd[1527]: 2025-05-14 18:01:39.930 [INFO][4428] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0", GenerateName:"calico-apiserver-84d447bc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"1238464b-10ae-4c67-ade5-0e48aef83d6e", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d447bc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5", Pod:"calico-apiserver-84d447bc64-xbnsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica475e14af3", MAC:"e6:01:c3:fd:bc:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:39.957355 containerd[1527]: 2025-05-14 18:01:39.945 [INFO][4428] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" 
Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbnsb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbnsb-eth0" May 14 18:01:40.006042 systemd-networkd[1423]: vxlan.calico: Link UP May 14 18:01:40.006068 systemd-networkd[1423]: vxlan.calico: Gained carrier May 14 18:01:40.026551 containerd[1527]: time="2025-05-14T18:01:40.026496849Z" level=info msg="connecting to shim 696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5" address="unix:///run/containerd/s/7d631a4075515fdf2bf13436e29c497c1f2a3314c1ff44e4ad8033835cb17660" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:40.079293 systemd-networkd[1423]: cali20761546bf4: Gained IPv6LL May 14 18:01:40.080938 systemd-networkd[1423]: cali2186f78e1e2: Gained IPv6LL May 14 18:01:40.098500 systemd[1]: Started cri-containerd-696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5.scope - libcontainer container 696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5. May 14 18:01:40.137015 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:40.196221 containerd[1527]: time="2025-05-14T18:01:40.196147474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbnsb,Uid:1238464b-10ae-4c67-ade5-0e48aef83d6e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5\"" May 14 18:01:40.280105 containerd[1527]: time="2025-05-14T18:01:40.280040086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 18:01:40.281380 containerd[1527]: time="2025-05-14T18:01:40.281333686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:40.283168 containerd[1527]: time="2025-05-14T18:01:40.283129287Z" 
level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:40.284274 containerd[1527]: time="2025-05-14T18:01:40.284229448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:40.285045 containerd[1527]: time="2025-05-14T18:01:40.284998889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.495650101s" May 14 18:01:40.285045 containerd[1527]: time="2025-05-14T18:01:40.285040769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 18:01:40.287112 containerd[1527]: time="2025-05-14T18:01:40.287073250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:01:40.288430 containerd[1527]: time="2025-05-14T18:01:40.288385211Z" level=info msg="CreateContainer within sandbox \"38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 18:01:40.298218 containerd[1527]: time="2025-05-14T18:01:40.297416816Z" level=info msg="Container 52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:40.308958 containerd[1527]: time="2025-05-14T18:01:40.308901903Z" level=info msg="CreateContainer within sandbox 
\"38df4490d93e211bfae11512e001965bb59fe3473126900f10efe93761745c47\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289\"" May 14 18:01:40.310827 containerd[1527]: time="2025-05-14T18:01:40.309586024Z" level=info msg="StartContainer for \"52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289\"" May 14 18:01:40.311598 containerd[1527]: time="2025-05-14T18:01:40.311562905Z" level=info msg="connecting to shim 52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289" address="unix:///run/containerd/s/b2380f2795cb84cbc765f501fee5f2d8bb0fbd572580a36b9f8c6a7b2bcc2913" protocol=ttrpc version=3 May 14 18:01:40.335608 systemd[1]: Started cri-containerd-52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289.scope - libcontainer container 52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289. May 14 18:01:40.393858 containerd[1527]: time="2025-05-14T18:01:40.393329795Z" level=info msg="StartContainer for \"52a13af1d9200f24661972eb17e9debb3e48bc6bf212120f3c0a55d5667cc289\" returns successfully" May 14 18:01:40.614107 containerd[1527]: time="2025-05-14T18:01:40.613873251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768945bb6-nd72r,Uid:d9760f37-4aa8-4778-a1ce-3cc8769fff10,Namespace:calico-system,Attempt:0,}" May 14 18:01:40.724744 kubelet[2645]: I0514 18:01:40.724677 2645 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 18:01:40.727375 kubelet[2645]: I0514 18:01:40.727326 2645 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 18:01:40.816424 kubelet[2645]: I0514 18:01:40.816351 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/csi-node-driver-tld67" podStartSLOduration=22.883754479 podStartE2EDuration="25.816326936s" podCreationTimestamp="2025-05-14 18:01:15 +0000 UTC" firstStartedPulling="2025-05-14 18:01:37.353721032 +0000 UTC m=+34.826422956" lastFinishedPulling="2025-05-14 18:01:40.286293489 +0000 UTC m=+37.758995413" observedRunningTime="2025-05-14 18:01:40.815038815 +0000 UTC m=+38.287740739" watchObservedRunningTime="2025-05-14 18:01:40.816326936 +0000 UTC m=+38.289028860" May 14 18:01:40.861765 systemd-networkd[1423]: calic0a61e82993: Link UP May 14 18:01:40.863526 systemd-networkd[1423]: calic0a61e82993: Gained carrier May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.675 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0 calico-kube-controllers-768945bb6- calico-system d9760f37-4aa8-4778-a1ce-3cc8769fff10 700 0 2025-05-14 18:01:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:768945bb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-768945bb6-nd72r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic0a61e82993 [] []}} ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.675 [INFO][4694] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.714 [INFO][4708] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" HandleID="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Workload="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.825 [INFO][4708] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" HandleID="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Workload="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000314b70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-768945bb6-nd72r", "timestamp":"2025-05-14 18:01:40.714714433 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.826 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.826 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.826 [INFO][4708] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.829 [INFO][4708] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.833 [INFO][4708] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.840 [INFO][4708] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.842 [INFO][4708] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.844 [INFO][4708] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.845 [INFO][4708] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.847 [INFO][4708] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.851 [INFO][4708] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.857 [INFO][4708] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.857 [INFO][4708] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" host="localhost" May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.857 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:01:40.877893 containerd[1527]: 2025-05-14 18:01:40.857 [INFO][4708] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" HandleID="k8s-pod-network.e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Workload="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.878782 containerd[1527]: 2025-05-14 18:01:40.860 [INFO][4694] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0", GenerateName:"calico-kube-controllers-768945bb6-", Namespace:"calico-system", SelfLink:"", UID:"d9760f37-4aa8-4778-a1ce-3cc8769fff10", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"768945bb6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-768945bb6-nd72r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0a61e82993", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:40.878782 containerd[1527]: 2025-05-14 18:01:40.860 [INFO][4694] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.878782 containerd[1527]: 2025-05-14 18:01:40.860 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0a61e82993 ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.878782 containerd[1527]: 2025-05-14 18:01:40.862 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.878782 containerd[1527]: 2025-05-14 18:01:40.863 [INFO][4694] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0", GenerateName:"calico-kube-controllers-768945bb6-", Namespace:"calico-system", SelfLink:"", UID:"d9760f37-4aa8-4778-a1ce-3cc8769fff10", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"768945bb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb", Pod:"calico-kube-controllers-768945bb6-nd72r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0a61e82993", MAC:"2a:0f:46:53:09:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:40.878782 containerd[1527]: 2025-05-14 18:01:40.874 [INFO][4694] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" Namespace="calico-system" Pod="calico-kube-controllers-768945bb6-nd72r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768945bb6--nd72r-eth0" May 14 18:01:40.990460 containerd[1527]: time="2025-05-14T18:01:40.990407003Z" level=info msg="connecting to shim e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb" address="unix:///run/containerd/s/8f20baff28c9eca96626fc1ca10e503c1c1fd96c58682ce12515ca8769e26c18" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:41.015414 systemd[1]: Started cri-containerd-e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb.scope - libcontainer container e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb. May 14 18:01:41.027562 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:41.060327 containerd[1527]: time="2025-05-14T18:01:41.060279684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768945bb6-nd72r,Uid:d9760f37-4aa8-4778-a1ce-3cc8769fff10,Namespace:calico-system,Attempt:0,} returns sandbox id \"e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb\"" May 14 18:01:41.425223 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL May 14 18:01:41.612910 containerd[1527]: time="2025-05-14T18:01:41.612872163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbhrh,Uid:908afd2c-b056-4674-b396-0c1c7595ceeb,Namespace:calico-apiserver,Attempt:0,}" May 14 18:01:41.744366 systemd-networkd[1423]: calica475e14af3: Gained IPv6LL May 14 18:01:41.772023 systemd-networkd[1423]: califecbccc54c8: Link UP May 14 18:01:41.772678 systemd-networkd[1423]: califecbccc54c8: Gained carrier May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.675 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0 calico-apiserver-84d447bc64- calico-apiserver 908afd2c-b056-4674-b396-0c1c7595ceeb 699 0 2025-05-14 18:01:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84d447bc64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84d447bc64-xbhrh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califecbccc54c8 [] []}} ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.675 [INFO][4783] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.713 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" HandleID="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Workload="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.728 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" HandleID="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Workload="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400030b020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84d447bc64-xbhrh", "timestamp":"2025-05-14 18:01:41.713594621 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.728 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.728 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.728 [INFO][4801] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.730 [INFO][4801] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.734 [INFO][4801] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.741 [INFO][4801] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.746 [INFO][4801] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.749 [INFO][4801] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.749 [INFO][4801] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.752 [INFO][4801] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7 May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.758 [INFO][4801] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.765 [INFO][4801] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.765 [INFO][4801] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" host="localhost" May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.765 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:01:41.785993 containerd[1527]: 2025-05-14 18:01:41.765 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" HandleID="k8s-pod-network.2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Workload="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.787440 containerd[1527]: 2025-05-14 18:01:41.769 [INFO][4783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0", GenerateName:"calico-apiserver-84d447bc64-", Namespace:"calico-apiserver", SelfLink:"", UID:"908afd2c-b056-4674-b396-0c1c7595ceeb", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d447bc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84d447bc64-xbhrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califecbccc54c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:41.787440 containerd[1527]: 2025-05-14 18:01:41.769 [INFO][4783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.787440 containerd[1527]: 2025-05-14 18:01:41.769 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califecbccc54c8 ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.787440 containerd[1527]: 2025-05-14 18:01:41.771 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.787440 containerd[1527]: 2025-05-14 18:01:41.771 [INFO][4783] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0", GenerateName:"calico-apiserver-84d447bc64-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"908afd2c-b056-4674-b396-0c1c7595ceeb", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d447bc64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7", Pod:"calico-apiserver-84d447bc64-xbhrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califecbccc54c8", MAC:"02:27:da:61:7d:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:01:41.787440 containerd[1527]: 2025-05-14 18:01:41.783 [INFO][4783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" Namespace="calico-apiserver" Pod="calico-apiserver-84d447bc64-xbhrh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d447bc64--xbhrh-eth0" May 14 18:01:41.817304 containerd[1527]: time="2025-05-14T18:01:41.817252081Z" level=info msg="connecting to shim 2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7" address="unix:///run/containerd/s/00eba157fe5e4d81539344602cb89483758a3c8f13e2c5afb40005821e69e962" namespace=k8s.io protocol=ttrpc version=3 May 14 18:01:41.851892 systemd[1]: Started 
cri-containerd-2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7.scope - libcontainer container 2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7. May 14 18:01:41.868540 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:01:41.895023 containerd[1527]: time="2025-05-14T18:01:41.894978166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d447bc64-xbhrh,Uid:908afd2c-b056-4674-b396-0c1c7595ceeb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7\"" May 14 18:01:42.064392 systemd-networkd[1423]: calic0a61e82993: Gained IPv6LL May 14 18:01:42.120167 containerd[1527]: time="2025-05-14T18:01:42.120114212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:42.120590 containerd[1527]: time="2025-05-14T18:01:42.120554412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 18:01:42.121622 containerd[1527]: time="2025-05-14T18:01:42.121577533Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:42.123579 containerd[1527]: time="2025-05-14T18:01:42.123535534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:42.124178 containerd[1527]: time="2025-05-14T18:01:42.124136414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", 
repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.837019204s" May 14 18:01:42.124178 containerd[1527]: time="2025-05-14T18:01:42.124172774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 18:01:42.125797 containerd[1527]: time="2025-05-14T18:01:42.125751535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:01:42.126888 containerd[1527]: time="2025-05-14T18:01:42.126853175Z" level=info msg="CreateContainer within sandbox \"696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:01:42.135331 containerd[1527]: time="2025-05-14T18:01:42.135268500Z" level=info msg="Container 36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:42.145532 containerd[1527]: time="2025-05-14T18:01:42.145484186Z" level=info msg="CreateContainer within sandbox \"696c84ccea774bbd59a8e7fcbf88479cf239efb3a16502ab63f5de955c6c95f5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e\"" May 14 18:01:42.146176 containerd[1527]: time="2025-05-14T18:01:42.146088346Z" level=info msg="StartContainer for \"36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e\"" May 14 18:01:42.148115 containerd[1527]: time="2025-05-14T18:01:42.148075507Z" level=info msg="connecting to shim 36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e" address="unix:///run/containerd/s/7d631a4075515fdf2bf13436e29c497c1f2a3314c1ff44e4ad8033835cb17660" protocol=ttrpc version=3 May 14 18:01:42.165393 systemd[1]: Started 
cri-containerd-36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e.scope - libcontainer container 36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e. May 14 18:01:42.214059 containerd[1527]: time="2025-05-14T18:01:42.213970943Z" level=info msg="StartContainer for \"36e6c0244c865f6b7d0ab4abbab9c567ee4f6b458d4f377cbb8826daeb73d59e\" returns successfully" May 14 18:01:43.151354 systemd-networkd[1423]: califecbccc54c8: Gained IPv6LL May 14 18:01:43.773442 containerd[1527]: time="2025-05-14T18:01:43.773398521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:43.773909 containerd[1527]: time="2025-05-14T18:01:43.773881841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 18:01:43.774849 containerd[1527]: time="2025-05-14T18:01:43.774760962Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:43.777357 containerd[1527]: time="2025-05-14T18:01:43.777216683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:43.778278 containerd[1527]: time="2025-05-14T18:01:43.778240203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.652422468s" May 14 18:01:43.778278 containerd[1527]: time="2025-05-14T18:01:43.778276003Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 18:01:43.779578 containerd[1527]: time="2025-05-14T18:01:43.779498644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:01:43.794508 containerd[1527]: time="2025-05-14T18:01:43.794367091Z" level=info msg="CreateContainer within sandbox \"e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 18:01:43.805856 containerd[1527]: time="2025-05-14T18:01:43.805569697Z" level=info msg="Container 8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:43.818629 containerd[1527]: time="2025-05-14T18:01:43.818586024Z" level=info msg="CreateContainer within sandbox \"e34c2aed609890226696aaa56992f369b506d83a5920a420a6440ffd9e7991bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030\"" May 14 18:01:43.819520 containerd[1527]: time="2025-05-14T18:01:43.819492144Z" level=info msg="StartContainer for \"8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030\"" May 14 18:01:43.820960 containerd[1527]: time="2025-05-14T18:01:43.820917825Z" level=info msg="connecting to shim 8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030" address="unix:///run/containerd/s/8f20baff28c9eca96626fc1ca10e503c1c1fd96c58682ce12515ca8769e26c18" protocol=ttrpc version=3 May 14 18:01:43.845438 systemd[1]: Started cri-containerd-8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030.scope - libcontainer container 8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030. 
May 14 18:01:43.901379 containerd[1527]: time="2025-05-14T18:01:43.901258186Z" level=info msg="StartContainer for \"8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030\" returns successfully" May 14 18:01:44.091502 containerd[1527]: time="2025-05-14T18:01:44.091381159Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:01:44.092396 containerd[1527]: time="2025-05-14T18:01:44.092341360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 18:01:44.094364 containerd[1527]: time="2025-05-14T18:01:44.094329921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 314.791117ms" May 14 18:01:44.094447 containerd[1527]: time="2025-05-14T18:01:44.094368401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 18:01:44.096223 containerd[1527]: time="2025-05-14T18:01:44.096196082Z" level=info msg="CreateContainer within sandbox \"2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:01:44.103258 containerd[1527]: time="2025-05-14T18:01:44.103219725Z" level=info msg="Container 1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400: CDI devices from CRI Config.CDIDevices: []" May 14 18:01:44.113322 containerd[1527]: time="2025-05-14T18:01:44.113270890Z" level=info msg="CreateContainer within sandbox \"2d271f00451a9868865a8fdb4acdd7524eb605765428085afdc1d9b5aa0467f7\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400\"" May 14 18:01:44.114655 containerd[1527]: time="2025-05-14T18:01:44.114627530Z" level=info msg="StartContainer for \"1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400\"" May 14 18:01:44.115765 containerd[1527]: time="2025-05-14T18:01:44.115738771Z" level=info msg="connecting to shim 1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400" address="unix:///run/containerd/s/00eba157fe5e4d81539344602cb89483758a3c8f13e2c5afb40005821e69e962" protocol=ttrpc version=3 May 14 18:01:44.136576 systemd[1]: Started cri-containerd-1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400.scope - libcontainer container 1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400. May 14 18:01:44.181603 kubelet[2645]: I0514 18:01:44.181420 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84d447bc64-xbnsb" podStartSLOduration=27.254792783 podStartE2EDuration="29.181403242s" podCreationTimestamp="2025-05-14 18:01:15 +0000 UTC" firstStartedPulling="2025-05-14 18:01:40.198297035 +0000 UTC m=+37.670998959" lastFinishedPulling="2025-05-14 18:01:42.124907494 +0000 UTC m=+39.597609418" observedRunningTime="2025-05-14 18:01:42.822810752 +0000 UTC m=+40.295512676" watchObservedRunningTime="2025-05-14 18:01:44.181403242 +0000 UTC m=+41.654105166" May 14 18:01:44.193329 containerd[1527]: time="2025-05-14T18:01:44.193274248Z" level=info msg="StartContainer for \"1c837ff605822baf8bc2d9f2bb2474f9c175903dc5b3391b7c81734430547400\" returns successfully" May 14 18:01:44.842960 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:39756.service - OpenSSH per-connection server daemon (10.0.0.1:39756). 
May 14 18:01:44.850670 kubelet[2645]: I0514 18:01:44.850037 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84d447bc64-xbhrh" podStartSLOduration=27.651341006 podStartE2EDuration="29.85001608s" podCreationTimestamp="2025-05-14 18:01:15 +0000 UTC" firstStartedPulling="2025-05-14 18:01:41.896278487 +0000 UTC m=+39.368980411" lastFinishedPulling="2025-05-14 18:01:44.094953561 +0000 UTC m=+41.567655485" observedRunningTime="2025-05-14 18:01:44.834240713 +0000 UTC m=+42.306942637" watchObservedRunningTime="2025-05-14 18:01:44.85001608 +0000 UTC m=+42.322718004"
May 14 18:01:44.851636 kubelet[2645]: I0514 18:01:44.851559 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-768945bb6-nd72r" podStartSLOduration=27.133690202 podStartE2EDuration="29.851544281s" podCreationTimestamp="2025-05-14 18:01:15 +0000 UTC" firstStartedPulling="2025-05-14 18:01:41.061548685 +0000 UTC m=+38.534250569" lastFinishedPulling="2025-05-14 18:01:43.779402724 +0000 UTC m=+41.252104648" observedRunningTime="2025-05-14 18:01:44.850933161 +0000 UTC m=+42.323635085" watchObservedRunningTime="2025-05-14 18:01:44.851544281 +0000 UTC m=+42.324246205"
May 14 18:01:44.877398 containerd[1527]: time="2025-05-14T18:01:44.877348533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030\" id:\"003f0e94f7adf5dbd7ab9e7b4ac5be713c698a7cdfc768f71ece9070e4503589\" pid:5005 exited_at:{seconds:1747245704 nanos:876630253}"
May 14 18:01:44.923258 sshd[4996]: Accepted publickey for core from 10.0.0.1 port 39756 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:44.925166 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:44.931857 systemd-logind[1513]: New session 10 of user core.
May 14 18:01:44.943439 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 18:01:45.173994 sshd[5019]: Connection closed by 10.0.0.1 port 39756
May 14 18:01:45.174565 sshd-session[4996]: pam_unix(sshd:session): session closed for user core
May 14 18:01:45.183060 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:39756.service: Deactivated successfully.
May 14 18:01:45.188693 systemd[1]: session-10.scope: Deactivated successfully.
May 14 18:01:45.190962 systemd-logind[1513]: Session 10 logged out. Waiting for processes to exit.
May 14 18:01:45.193962 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:39760.service - OpenSSH per-connection server daemon (10.0.0.1:39760).
May 14 18:01:45.194932 systemd-logind[1513]: Removed session 10.
May 14 18:01:45.244091 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 39760 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:45.245456 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:45.250032 systemd-logind[1513]: New session 11 of user core.
May 14 18:01:45.264381 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 18:01:45.519669 sshd[5035]: Connection closed by 10.0.0.1 port 39760
May 14 18:01:45.521377 sshd-session[5033]: pam_unix(sshd:session): session closed for user core
May 14 18:01:45.530750 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:39760.service: Deactivated successfully.
May 14 18:01:45.533956 systemd[1]: session-11.scope: Deactivated successfully.
May 14 18:01:45.535299 systemd-logind[1513]: Session 11 logged out. Waiting for processes to exit.
May 14 18:01:45.540520 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:39770.service - OpenSSH per-connection server daemon (10.0.0.1:39770).
May 14 18:01:45.542573 systemd-logind[1513]: Removed session 11.
May 14 18:01:45.594912 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 39770 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:45.596297 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:45.604959 systemd-logind[1513]: New session 12 of user core.
May 14 18:01:45.614451 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 18:01:45.799775 sshd[5055]: Connection closed by 10.0.0.1 port 39770
May 14 18:01:45.800680 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
May 14 18:01:45.804592 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:39770.service: Deactivated successfully.
May 14 18:01:45.806489 systemd[1]: session-12.scope: Deactivated successfully.
May 14 18:01:45.809376 systemd-logind[1513]: Session 12 logged out. Waiting for processes to exit.
May 14 18:01:45.810822 systemd-logind[1513]: Removed session 12.
May 14 18:01:45.824058 kubelet[2645]: I0514 18:01:45.824011 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:01:50.813715 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780).
May 14 18:01:50.869627 sshd[5081]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:50.871369 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:50.877323 systemd-logind[1513]: New session 13 of user core.
May 14 18:01:50.883395 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 18:01:51.022910 sshd[5083]: Connection closed by 10.0.0.1 port 39780
May 14 18:01:51.023448 sshd-session[5081]: pam_unix(sshd:session): session closed for user core
May 14 18:01:51.045890 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:39780.service: Deactivated successfully.
May 14 18:01:51.048835 systemd[1]: session-13.scope: Deactivated successfully.
May 14 18:01:51.050227 systemd-logind[1513]: Session 13 logged out. Waiting for processes to exit.
May 14 18:01:51.053798 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:39784.service - OpenSSH per-connection server daemon (10.0.0.1:39784).
May 14 18:01:51.055210 systemd-logind[1513]: Removed session 13.
May 14 18:01:51.105824 sshd[5096]: Accepted publickey for core from 10.0.0.1 port 39784 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:51.107394 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:51.111816 systemd-logind[1513]: New session 14 of user core.
May 14 18:01:51.124361 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 18:01:51.340537 sshd[5098]: Connection closed by 10.0.0.1 port 39784
May 14 18:01:51.341055 sshd-session[5096]: pam_unix(sshd:session): session closed for user core
May 14 18:01:51.353833 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:39784.service: Deactivated successfully.
May 14 18:01:51.356385 systemd[1]: session-14.scope: Deactivated successfully.
May 14 18:01:51.357546 systemd-logind[1513]: Session 14 logged out. Waiting for processes to exit.
May 14 18:01:51.360626 systemd-logind[1513]: Removed session 14.
May 14 18:01:51.362364 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:39790.service - OpenSSH per-connection server daemon (10.0.0.1:39790).
May 14 18:01:51.409546 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 39790 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:51.410810 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:51.415474 systemd-logind[1513]: New session 15 of user core.
May 14 18:01:51.423382 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 18:01:53.008250 sshd[5111]: Connection closed by 10.0.0.1 port 39790
May 14 18:01:53.008690 sshd-session[5109]: pam_unix(sshd:session): session closed for user core
May 14 18:01:53.021362 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:39790.service: Deactivated successfully.
May 14 18:01:53.023209 systemd[1]: session-15.scope: Deactivated successfully.
May 14 18:01:53.026480 systemd-logind[1513]: Session 15 logged out. Waiting for processes to exit.
May 14 18:01:53.031783 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:45180.service - OpenSSH per-connection server daemon (10.0.0.1:45180).
May 14 18:01:53.033405 systemd-logind[1513]: Removed session 15.
May 14 18:01:53.088032 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 45180 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:53.089471 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:53.093741 systemd-logind[1513]: New session 16 of user core.
May 14 18:01:53.100363 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 18:01:53.395108 sshd[5136]: Connection closed by 10.0.0.1 port 45180
May 14 18:01:53.396506 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
May 14 18:01:53.404558 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:45180.service: Deactivated successfully.
May 14 18:01:53.407576 systemd[1]: session-16.scope: Deactivated successfully.
May 14 18:01:53.409056 systemd-logind[1513]: Session 16 logged out. Waiting for processes to exit.
May 14 18:01:53.412458 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184).
May 14 18:01:53.413000 systemd-logind[1513]: Removed session 16.
May 14 18:01:53.464460 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:53.465662 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:53.471439 systemd-logind[1513]: New session 17 of user core.
May 14 18:01:53.483391 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 18:01:53.616583 sshd[5150]: Connection closed by 10.0.0.1 port 45184
May 14 18:01:53.616904 sshd-session[5148]: pam_unix(sshd:session): session closed for user core
May 14 18:01:53.620020 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:45184.service: Deactivated successfully.
May 14 18:01:53.622109 systemd[1]: session-17.scope: Deactivated successfully.
May 14 18:01:53.622994 systemd-logind[1513]: Session 17 logged out. Waiting for processes to exit.
May 14 18:01:53.625068 systemd-logind[1513]: Removed session 17.
May 14 18:01:56.572306 containerd[1527]: time="2025-05-14T18:01:56.572251506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a1b86fc11ace6530935200899f00fa3cf7a490d0c3ebee5da09469bec16a030\" id:\"574db67ed72b09618c6447d469a57a7267088301ae15d7da6234418bab8ccc25\" pid:5174 exited_at:{seconds:1747245716 nanos:572016066}"
May 14 18:01:58.628844 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:45198.service - OpenSSH per-connection server daemon (10.0.0.1:45198).
May 14 18:01:58.669040 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 45198 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:01:58.670274 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:01:58.674421 systemd-logind[1513]: New session 18 of user core.
May 14 18:01:58.680367 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 18:01:58.790925 sshd[5190]: Connection closed by 10.0.0.1 port 45198
May 14 18:01:58.791264 sshd-session[5188]: pam_unix(sshd:session): session closed for user core
May 14 18:01:58.794977 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:45198.service: Deactivated successfully.
May 14 18:01:58.796695 systemd[1]: session-18.scope: Deactivated successfully.
May 14 18:01:58.797401 systemd-logind[1513]: Session 18 logged out. Waiting for processes to exit.
May 14 18:01:58.798398 systemd-logind[1513]: Removed session 18.
May 14 18:02:00.184124 containerd[1527]: time="2025-05-14T18:02:00.184081571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f537ea5da75da5a91110bf95382eb6b8ccc32f970cd01c5b24dd3f5cf44d3577\" id:\"0cca26b366ee40481453217580f0810b9e5949294b0767d53d89634b8bd3f272\" pid:5214 exited_at:{seconds:1747245720 nanos:183574810}"
May 14 18:02:03.806688 systemd[1]: Started sshd@18-10.0.0.61:22-10.0.0.1:53974.service - OpenSSH per-connection server daemon (10.0.0.1:53974).
May 14 18:02:03.861690 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 53974 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:02:03.863080 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:02:03.866983 systemd-logind[1513]: New session 19 of user core.
May 14 18:02:03.881360 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 18:02:04.003888 sshd[5239]: Connection closed by 10.0.0.1 port 53974
May 14 18:02:04.004251 sshd-session[5237]: pam_unix(sshd:session): session closed for user core
May 14 18:02:04.008596 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:53974.service: Deactivated successfully.
May 14 18:02:04.010902 systemd[1]: session-19.scope: Deactivated successfully.
May 14 18:02:04.012093 systemd-logind[1513]: Session 19 logged out. Waiting for processes to exit.
May 14 18:02:04.013414 systemd-logind[1513]: Removed session 19.
May 14 18:02:09.015490 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:53976.service - OpenSSH per-connection server daemon (10.0.0.1:53976).
May 14 18:02:09.075305 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 53976 ssh2: RSA SHA256:BMeAQICuA2OnIsP+qyp4K3RmZxP3sZUKEyFSi3UEAFA
May 14 18:02:09.075891 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:02:09.080256 systemd-logind[1513]: New session 20 of user core.
May 14 18:02:09.089423 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 18:02:09.219347 sshd[5260]: Connection closed by 10.0.0.1 port 53976
May 14 18:02:09.219655 sshd-session[5258]: pam_unix(sshd:session): session closed for user core
May 14 18:02:09.222523 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:53976.service: Deactivated successfully.
May 14 18:02:09.224166 systemd[1]: session-20.scope: Deactivated successfully.
May 14 18:02:09.225459 systemd-logind[1513]: Session 20 logged out. Waiting for processes to exit.
May 14 18:02:09.227741 systemd-logind[1513]: Removed session 20.